Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems
Due to the increasing usage of machine learning (ML) techniques in security-
and safety-critical domains, such as autonomous systems and medical diagnosis,
ensuring correct behavior of ML systems, especially for different corner cases,
is of growing importance. In this paper, we propose a generic framework for
evaluating security and robustness of ML systems using different real-world
safety properties. We further design, implement and evaluate VeriVis, a
scalable methodology that can verify a diverse set of safety properties for
state-of-the-art computer vision systems with only blackbox access. VeriVis
leverages different input space reduction techniques for efficient verification
of different safety properties. VeriVis is able to find thousands of safety
violations in fifteen state-of-the-art computer vision systems including ten
Deep Neural Networks (DNNs) such as Inception-v3 and Nvidia's Dave self-driving
system with thousands of neurons as well as five commercial third-party vision
APIs including Google vision and Clarifai for twelve different safety
properties. Furthermore, VeriVis can successfully verify local safety
properties, on average, for around 31.7% of the test images. VeriVis finds up
to 64.8x more violations than existing gradient-based methods that, unlike
VeriVis, cannot ensure non-existence of any violations. Finally, we show that
retraining using the safety violations detected by VeriVis can reduce the
average number of violations by up to 60.2%.
Comment: 16 pages, 11 tables, 11 figures
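The core idea the abstract describes — exhaustively checking a safety property over a discretised, blackbox-accessible input space — can be illustrated with a toy brightness-invariance property. This is a minimal sketch, not the paper's implementation: the function names (`brightness_shift`, `verify_brightness_safety`) and the specific property are illustrative assumptions, and real VeriVis properties cover many more transformations.

```python
from typing import Callable, List

Image = List[List[int]]

def brightness_shift(image: Image, delta: int) -> Image:
    # Shift every pixel by delta, clipped to the valid [0, 255] range.
    return [[min(255, max(0, p + delta)) for p in row] for row in image]

def verify_brightness_safety(classify: Callable[[Image], int],
                             image: Image,
                             max_delta: int = 10) -> List[int]:
    """Exhaustively check all integer brightness shifts in [-max_delta, max_delta].

    Because the transformation space is finite and fully enumerated, an
    empty result means the property provably holds for this input —
    unlike gradient-based search, which can only find violations.
    Only blackbox access to `classify` is needed.
    """
    original = classify(image)
    return [d for d in range(-max_delta, max_delta + 1)
            if classify(brightness_shift(image, d)) != original]
```

The exhaustiveness over a reduced input space is what lets a blackbox method certify non-existence of violations, which the abstract contrasts with gradient-based attacks.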
The SkyMapper Transient Survey
The SkyMapper 1.3 m telescope at Siding Spring Observatory has now begun
regular operations. Alongside the Southern Sky Survey, a comprehensive digital
survey of the entire southern sky, SkyMapper will carry out a search for
supernovae and other transients. The search strategy, covering a total
footprint area of ~2000 deg² with a cadence of days, is optimised for
discovery and follow-up of low-redshift type Ia supernovae to constrain cosmic
expansion and peculiar velocities. We describe the search operations and
infrastructure, including a parallelised software pipeline to discover variable
objects in difference imaging; simulations of the performance of the survey
over its lifetime; public access to discovered transients; and some first
results from the Science Verification data.
Comment: 13 pages, 11 figures; submitted to PAS
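The difference-imaging step mentioned above can be reduced to a toy pixel-level illustration: subtract a reference frame from a new science frame and flag pixels whose excess flux exceeds a threshold. This is only a conceptual sketch under strong simplifying assumptions — the function name `find_transients` is hypothetical, and production pipelines additionally perform astrometric alignment, PSF matching, and candidate vetting.

```python
from typing import List, Tuple

def find_transients(science: List[List[float]],
                    reference: List[List[float]],
                    threshold: float) -> List[Tuple[int, int]]:
    """Return (row, col) pixels where the science frame exceeds the
    reference by more than `threshold` — candidate variable objects."""
    return [(r, c)
            for r, row in enumerate(science)
            for c, val in enumerate(row)
            if val - reference[r][c] > threshold]
```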
Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness
In recent years, the notion of local robustness (or robustness for short) has
emerged as a desirable property of deep neural networks. Intuitively,
robustness means that small perturbations to an input do not cause the network
to perform misclassifications. In this paper, we present a novel algorithm for
verifying robustness properties of neural networks. Our method synergistically
combines gradient-based optimization methods for counterexample search with
abstraction-based proof search to obtain a sound and ({\delta}-)complete
decision procedure. Our method also employs a data-driven approach to learn a
verification policy that guides abstract interpretation during proof search. We
have implemented the proposed approach in a tool called Charon and
experimentally evaluated it on hundreds of benchmarks. Our experiments show
that the proposed approach significantly outperforms three state-of-the-art
tools, namely AI^2, Reluplex, and ReluVal.
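The abstraction half of the approach described above can be sketched with interval bound propagation through a one-layer ReLU network: if the target class's lower bound beats every other class's upper bound over the whole input box, robustness is proved. This is a hedged illustration only — the names `interval_linear` and `verify_box` are made up, and Charon additionally interleaves gradient-based counterexample search and a learned policy for refining the abstraction, neither of which is shown here.

```python
from typing import List, Tuple

def interval_linear(lo: List[float], hi: List[float],
                    W: List[List[float]], b: List[float]
                    ) -> Tuple[List[float], List[float]]:
    """Propagate an axis-aligned box through y = W x + b.

    For each output, a weight contributes its minimum when paired with
    lo (if positive) or hi (if negative), and vice versa for the maximum.
    """
    out_lo, out_hi = [], []
    for w_row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(w_row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(w_row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def verify_box(W: List[List[float]], b: List[float],
               lo: List[float], hi: List[float], target: int) -> bool:
    """Sound but incomplete robustness check: returns True only if
    `target` provably has the highest score on the entire box."""
    out_lo, out_hi = interval_linear(lo, hi, W, b)
    # ReLU is monotone, so it can be applied to each bound elementwise.
    out_lo = [max(0.0, v) for v in out_lo]
    out_hi = [max(0.0, v) for v in out_hi]
    return all(out_lo[target] > out_hi[j]
               for j in range(len(out_lo)) if j != target)
```

When `verify_box` returns False the result is inconclusive; that is the point at which a combined method would fall back to optimization-based counterexample search or refine the abstraction.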
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
Despite the improved accuracy of deep neural networks, the discovery of
adversarial examples has raised serious safety concerns. Most existing
approaches for crafting adversarial examples necessitate some knowledge
(architecture, parameters, etc.) of the network at hand. In this paper, we
focus on image classifiers and propose a feature-guided black-box approach to
test the safety of deep neural networks that requires no such knowledge. Our
algorithm employs object detection techniques such as SIFT (Scale Invariant
Feature Transform) to extract features from an image. These features are
converted into a mutable saliency distribution, where high probability is
assigned to pixels that affect the composition of the image with respect to the
human visual system. We formulate the crafting of adversarial examples as a
two-player turn-based stochastic game, where the first player's objective is to
minimise the distance to an adversarial example by manipulating the features,
and the second player can be cooperative, adversarial, or random. We show that,
theoretically, the two-player game can converge to the optimal strategy, and
that the optimal strategy represents a globally minimal adversarial image. For
Lipschitz networks, we also identify conditions that provide safety guarantees
that no adversarial examples exist. Using Monte Carlo tree search we gradually
explore the game state space to search for adversarial examples. Our
experiments show that, despite the black-box setting, manipulations guided by a
perception-based saliency distribution are competitive with state-of-the-art
methods that rely on white-box saliency matrices or sophisticated optimization
procedures. Finally, we show how our method can be used to evaluate robustness
of neural networks in safety-critical applications such as traffic sign
recognition in self-driving cars.
Comment: 35 pages, 5 tables, 23 figures
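The saliency-guided, blackbox search the abstract describes can be caricatured as a greedy loop: visit pixels in decreasing saliency order, perturb each, and query only the blackbox classifier until the label flips. This is a deliberately simplified stand-in — `blackbox_attack` is a hypothetical name, the saliency map is taken as given rather than derived from SIFT features, and the paper's actual method is a two-player game explored with Monte Carlo tree search, not a greedy pass.

```python
from typing import Callable, List, Optional

Image = List[List[int]]

def blackbox_attack(classify: Callable[[Image], int],
                    image: Image,
                    saliency: List[List[float]],
                    budget: int = 20) -> Optional[Image]:
    """Greedily perturb high-saliency pixels until the blackbox
    classifier changes its label, or the query budget runs out."""
    original = classify(image)
    img = [row[:] for row in image]          # work on a copy
    # Visit pixels in decreasing saliency order.
    coords = sorted(((r, c) for r in range(len(img))
                     for c in range(len(img[0]))),
                    key=lambda rc: -saliency[rc[0]][rc[1]])
    for r, c in coords[:budget]:
        img[r][c] = 0 if img[r][c] > 127 else 255   # crude pixel flip
        if classify(img) != original:
            return img                        # adversarial example found
    return None                               # no label flip within budget
```

Concentrating queries on perceptually important pixels is what keeps a blackbox search competitive with white-box saliency methods, per the experiments the abstract reports.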