An SMT-Based Approach for Verifying Binarized Neural Networks
Deep learning has emerged as an effective approach for creating modern
software systems, with neural networks often surpassing hand-crafted systems.
Unfortunately, neural networks are known to suffer from various safety and
security issues. Formal verification is a promising avenue for tackling this
difficulty, by formally certifying that networks are correct. We propose an
SMT-based technique for verifying Binarized Neural Networks - a popular kind of
neural network, where some weights have been binarized in order to render the
neural network more memory and energy efficient, and quicker to evaluate. One
novelty of our technique is that it allows the verification of neural networks
that include both binarized and non-binarized components. Neural network
verification is computationally very difficult, and so we propose here various
optimizations, integrated into our SMT procedure as deduction steps, as well as
an approach for parallelizing verification queries. We implement our technique
as an extension to the Marabou framework, and use it to evaluate the approach
on popular binarized neural network architectures.
Comment: This is a preprint version of a paper that will appear at TACAS 202
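The constraint-solving view of a binarized neuron can be sketched concretely. The snippet below is an illustrative reduction, not Marabou's actual encoding: it shows that a neuron y = sign(w·x + b) with ±1 weights and inputs is equivalent to a cardinality test (count of positions where w and x agree against a threshold), which is the kind of fact an SMT procedure can exploit as a deduction step. The neuron parameters are hypothetical.

```python
from itertools import product

# A binarized neuron computes y = sign(w . x + b) with w, x in {-1, +1}^n.
# Since w . x = 2 * agree(w, x) - n (agree = count of positions with
# w_i == x_i), we have sign(w . x + b) = +1  <=>  agree >= ceil((n - b) / 2).

def neuron_arith(w, x, b):
    """Reference semantics: sign of the real-valued pre-activation."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def neuron_cardinality(w, x, b):
    """Equivalent cardinality form, as a constraint solver could encode it."""
    n = len(w)
    agree = sum(wi == xi for wi, xi in zip(w, x))
    threshold = -(-(n - b) // 2)          # ceil((n - b) / 2)
    return 1 if agree >= threshold else -1

# Exhaustively check that the two formulations agree on a small neuron.
w, b = (1, -1, 1, 1), 1
for x in product((-1, 1), repeat=4):
    assert neuron_arith(w, x, b) == neuron_cardinality(w, x, b)
```

The cardinality form replaces real arithmetic with counting over Booleans, which is what makes binarized components amenable to SAT/SMT reasoning.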
Efficient Exact Verification of Binarized Neural Networks
We present a new system, EEV, for verifying binarized neural networks (BNNs).
We formulate BNN verification as a Boolean satisfiability problem (SAT) with
reified cardinality constraints of the form y = (x1 + ... + xn ≤ b),
where y and the xi are Boolean variables possibly with negation and b is an
integer constant. We also identify two properties, specifically balanced weight
sparsity and lower cardinality bounds, that reduce the verification complexity
of BNNs. EEV contains both a SAT solver enhanced to handle reified cardinality
constraints natively and novel training strategies designed to reduce
verification complexity by delivering networks with improved sparsity
properties and cardinality bounds. We demonstrate the effectiveness of EEV by
presenting the first exact verification results for ℓ∞-bounded
adversarial robustness of nontrivial convolutional BNNs on the MNIST and
CIFAR10 datasets. Our results also show that, depending on the dataset and
network architecture, our techniques verify BNNs between a factor of ten to ten
thousand times faster than the best previous exact verification techniques for
either binarized or real-valued networks.
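The semantics of a reified cardinality constraint can be made explicit with a small brute-force check. This is an illustrative sketch of what the constraint means, not EEV's native propagation machinery: y must be true exactly when at most b of the literals are true, so every assignment to x1..xn fixes y uniquely.

```python
from itertools import product

def reified_card_holds(y, lits, b):
    """True iff the assignment satisfies y <-> (x1 + ... + xn <= b)."""
    return y == (sum(lits) <= b)

def models(n, b):
    """All (y, x1..xn) assignments satisfying the reified constraint."""
    out = []
    for y in (False, True):
        for xs in product((0, 1), repeat=n):
            if reified_card_holds(y, xs, b):
                out.append((y, xs))
    return out

# Each of the 2**n x-assignments determines y, so exactly 2**n models exist.
assert len(models(3, 1)) == 2 ** 3
```

A solver with native support propagates such constraints without expanding them into clauses, which is where the reported speedups come from.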
Automated Verification of Neural Networks: Advances, Challenges and Perspectives
Neural networks are one of the most investigated and widely used techniques
in Machine Learning. In spite of their success, they still find limited
application in safety- and security-related contexts, wherein assurance about
networks' performances must be provided. In the recent past, automated
reasoning techniques have been proposed by several researchers to close the gap
between neural networks and applications requiring formal guarantees about
their behavior. In this work, we propose a primer of such techniques and a
comprehensive categorization of existing approaches for the automated
verification of neural networks. A discussion about current limitations and
directions for future investigation is provided to foster research on this
topic at the crossroads of Machine Learning and Automated Reasoning.
Verifying Properties of Binarized Deep Neural Networks
Understanding properties of deep neural networks is an important challenge in
deep learning. In this paper, we take a step in this direction by proposing a
rigorous way of verifying properties of a popular class of neural networks,
Binarized Neural Networks, using the well-developed means of Boolean
satisfiability. Our main contribution is a construction that creates a
representation of a binarized neural network as a Boolean formula. Our encoding
is the first exact Boolean representation of a deep neural network. Using this
encoding, we leverage the power of modern SAT solvers along with a proposed
counterexample-guided search procedure to verify various properties of these
networks. A particular focus will be on the critical property of robustness to
adversarial perturbations. For this property, our experimental results
demonstrate that our approach scales to medium-size deep neural networks used
in image classification tasks. To the best of our knowledge, this is the first
work on verifying properties of deep neural networks using an exact Boolean
encoding of the network.
Comment: 10 pages
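The idea of an exact Boolean representation can be illustrated on a toy scale. The sketch below is not the paper's construction: it takes a single 3-input binarized "majority" neuron (a threshold function) and a hand-written Boolean formula for it, then certifies exactness by exhausting the truth table, which is precisely the equivalence such an encoding must guarantee for every neuron.

```python
from itertools import product

def bnn_neuron(x):
    """Toy binarized neuron: fires iff at least 2 of 3 inputs are set."""
    return sum(x) >= 2

def boolean_formula(x):
    """Exact Boolean encoding of the neuron: pairwise ANDs, ORed together."""
    a, b, c = x
    return (a and b) or (a and c) or (b and c)

# Exactness: the formula and the neuron agree on every possible input.
for x in product((0, 1), repeat=3):
    assert bool(boolean_formula(x)) == bnn_neuron(x)
```

Once every neuron has such a formula, the whole network composes into one Boolean formula that a SAT solver can query, e.g. for the existence of an adversarial perturbation.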
Combinatorial Attacks on Binarized Neural Networks
Binarized Neural Networks (BNNs) have recently attracted significant interest
due to their computational efficiency. Concurrently, it has been shown that
neural networks may be overly sensitive to "attacks" - tiny adversarial changes
in the input - which may be detrimental to their use in safety-critical
domains. Designing attack algorithms that effectively fool trained models is a
key step towards learning robust neural networks. The discrete,
non-differentiable nature of BNNs, which distinguishes them from their
full-precision counterparts, poses a challenge to gradient-based attacks. In
this work, we study the problem of attacking a BNN through the lens of
combinatorial and integer optimization. We propose a Mixed Integer Linear
Programming (MILP) formulation of the problem. While exact and flexible, the
MILP quickly becomes intractable as the network and perturbation space grow. To
address this issue, we propose IProp, a decomposition-based algorithm that
solves a sequence of much smaller MILP problems. Experimentally, we evaluate
both proposed methods against the standard gradient-based attack (FGSM) on
MNIST and Fashion-MNIST, and show that IProp performs favorably compared to
FGSM, while scaling beyond the limits of the MILP.
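The gradient-based baseline (FGSM) is simple enough to sketch in full. The toy model and weights below are hypothetical, and the model is an ordinary differentiable logistic unit rather than a BNN: FGSM perturbs each input feature by ε in the direction of the sign of the loss gradient, which is exactly the step that the discrete, non-differentiable structure of BNNs frustrates.

```python
import math

W = [2.0, -3.0, 1.0]                      # assumed toy model parameters

def predict(x):
    """p(y = 1 | x) for a logistic model."""
    z = sum(wi * xi for wi, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y_true, eps):
    # For logistic loss, dL/dx_i = (p - y_true) * W_i, so FGSM adds
    # eps * sign((p - y_true) * W_i) to each feature to increase the loss.
    p = predict(x)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y_true) * wi) for xi, wi in zip(x, W)]

x = [0.5, 0.1, 0.3]
adv = fgsm(x, y_true=1, eps=0.25)
# The one-step attack lowers the model's confidence in the true label.
assert predict(adv) < predict(x)
```

On a BNN the sign activations have zero gradient almost everywhere, which motivates replacing this gradient step with the combinatorial search that IProp performs.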
NullaNet: Training Deep Neural Networks for Reduced-Memory-Access Inference
Deep neural networks have been successfully deployed in a wide variety of
applications including computer vision and speech recognition. However,
computational and storage complexity of these models has forced the majority of
computations to be performed on high-end computing platforms or on the cloud.
To cope with computational and storage complexity of these models, this paper
presents a training method that enables a radically different approach for
realization of deep neural networks through Boolean logic minimization. The
aforementioned realization completely removes the energy-hungry step of
accessing memory for obtaining model parameters, consumes about two orders of
magnitude fewer computing resources compared to realizations that use
floating-point operations, and has a substantially lower latency.
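The core idea, evaluating a neuron as fixed combinational logic with no parameter fetches, can be sketched on a toy neuron. This is a hypothetical small example, not NullaNet's training method: the neuron's behavior over binary inputs is enumerated once into a truth table packed into a single integer, after which inference reads no stored weights at all.

```python
from itertools import product

def trained_neuron(x):
    """Stand-in for a trained neuron over binary inputs."""
    return int(3 * x[0] - 2 * x[1] + x[2] >= 1)

# "Compile" the neuron: enumerate its 2**3 inputs once, store one integer.
TABLE = 0
for i, x in enumerate(product((0, 1), repeat=3)):
    TABLE |= trained_neuron(x) << i

def logic_neuron(x):
    """Parameter-free inference: a pure logic/bit lookup, no weight memory."""
    i = x[0] * 4 + x[1] * 2 + x[2]        # matches the enumeration order above
    return (TABLE >> i) & 1

# The compiled logic reproduces the neuron exactly.
for x in product((0, 1), repeat=3):
    assert logic_neuron(x) == trained_neuron(x)
```

In hardware, such a truth table would be minimized into gates rather than stored as a bitmask, which is what removes the energy-hungry memory accesses the abstract refers to.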
PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence
This work explores the binarization of the deconvolution-based generator in a
GAN for memory saving and speedup of image construction. Our study suggests
that different from convolutional neural networks (including the discriminator)
where all layers can be binarized, only some of the layers in the generator can
be binarized without significant performance loss. Supported by theoretical
analysis and verified by experiments, a direct metric based on the dimension of
deconvolution operations is established, which can be used to quickly decide
which layers in the generator can be binarized. Our results also indicate that
both the generator and the discriminator should be binarized simultaneously for
balanced competition and better performance. Experimental results based on
CelebA suggest that directly applying state-of-the-art binarization techniques
to all the layers of the generator will lead to a 2.83× performance loss
measured by sliced Wasserstein distance compared with the original generator,
while applying them to selected layers only can yield up to a 25.81×
saving in memory consumption, and 1.96× and 1.32× speedups in
inference and training respectively, with little performance loss.
Comment: 17 pages, paper re-organized
Minutiae Extraction from Fingerprint Images - a Review
Fingerprints are the oldest and most widely used form of biometric
identification. Everyone is known to have unique, immutable fingerprints. As
most Automatic Fingerprint Recognition Systems are based on local ridge
features known as minutiae, marking minutiae accurately and rejecting false
ones is very important. However, fingerprint images get degraded and corrupted
due to variations in skin and impression conditions. Thus, image enhancement
techniques are employed prior to minutiae extraction. A critical step in
automatic fingerprint matching is to reliably extract minutiae from the input
fingerprint images. This paper presents a review of a large number of
techniques present in the literature for extracting fingerprint minutiae. The
techniques are broadly classified as those working on binarized images and
those that work on grayscale images directly.
Comment: 12 pages; IJCSI International Journal of Computer Science Issues,
Vol. 8, Issue 5, September 201
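Among techniques working on binarized images, the classic crossing-number method is concrete enough to sketch. On a binarized and thinned (one-pixel-wide) ridge image, the crossing number of a ridge pixel is half the number of 0/1 transitions among its 8 neighbors visited in a cycle; a value of 1 marks a ridge ending and 3 a bifurcation. The tiny skeleton below is a made-up example.

```python
def crossing_number(img, r, c):
    """Half the number of 0/1 transitions around pixel (r, c)."""
    # 8-neighborhood in clockwise order; repeat the first to close the cycle.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    vals = [img[r + dr][c + dc] for dr, dc in ring]
    vals.append(vals[0])
    return sum(abs(vals[i] - vals[i + 1]) for i in range(8)) // 2

def minutiae(img):
    """Ridge endings (CN == 1) and bifurcations (CN == 3) of interior pixels."""
    out = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            if img[r][c] == 1:
                cn = crossing_number(img, r, c)
                if cn == 1:
                    out.append((r, c, "ending"))
                elif cn == 3:
                    out.append((r, c, "bifurcation"))
    return out

# A tiny thinned ridge segment: both of its endpoints are ridge endings.
skeleton = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
assert minutiae(skeleton) == [(1, 1, "ending"), (1, 3, "ending")]
```

Real systems apply this only after enhancement and thinning, and then post-process to reject the false minutiae the abstract mentions.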
BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks
Verifying and explaining the behavior of neural networks is becoming
increasingly important, especially when they are deployed in safety-critical
applications. In this paper, we study verification problems for Binarized
Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural
networks. Our approach is to encode BNNs into Binary Decision Diagrams (BDDs),
which is done by exploiting the internal structure of the BNNs. In particular,
we translate the input-output relation of blocks in BNNs to cardinality
constraints which are then encoded by BDDs. Based on the encoding, we develop a
quantitative verification framework for BNNs where precise and comprehensive
analysis of BNNs can be performed. We demonstrate the application of our
framework by providing quantitative robustness analysis and interpretability
for BNNs. We implement a prototype tool BDD4BNN and carry out extensive
experiments which confirm the effectiveness and efficiency of our approach.
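Why BDDs fit cardinality constraints can be shown with a small dynamic program. This is an illustrative sketch, not BDD4BNN's construction: the BDD of "at most b of x1..xn are true" has essentially one node per (variable level, count-so-far) pair, so counting satisfying inputs, the core operation behind quantitative robustness analysis, takes O(n·b) work instead of a 2**n enumeration.

```python
from itertools import product

def count_at_most(n, b):
    """Number of x in {0,1}^n with at most b ones, via a BDD-style DP."""
    # layer[c] = partial assignments of the variables seen so far with
    # exactly c ones; counts above b collapse into a dead state b + 1.
    layer = [1] + [0] * (b + 1)
    for _ in range(n):
        nxt = [0] * (b + 2)
        for c, ways in enumerate(layer):
            nxt[c] += ways                      # current variable set to 0
            nxt[min(c + 1, b + 1)] += ways      # current variable set to 1
        layer = nxt
    return sum(layer[: b + 1])

# Cross-check the DP against brute-force enumeration.
brute = sum(1 for xs in product((0, 1), repeat=6) if sum(xs) <= 2)
assert count_at_most(6, 2) == brute
```

The same level-by-level counting over a BDD is what lets a quantitative framework report, e.g., the exact fraction of perturbed inputs a BNN classifies correctly.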
Formal methods and software engineering for DL. Security, safety and productivity for DL systems development
Deep Learning (DL) techniques are now widespread and being integrated into
many important systems. Their classification and recognition abilities ensure
their relevance for multiple application domains. As machine-learning that
relies on training instead of algorithm programming, they offer a high degree
of productivity. But they can be vulnerable to attacks and the verification of
their correctness is only just emerging as a scientific and engineering
possibility. This paper is a major update of a previously-published survey,
attempting to cover all recent publications in this area. It also covers an
even more recent trend, namely the design of domain-specific languages for
producing and training neural nets.
Comment: Submitted to IEEE-CCECE201