Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness
In recent years, the notion of local robustness (or robustness for short) has
emerged as a desirable property of deep neural networks. Intuitively,
robustness means that small perturbations to an input do not cause the network
to misclassify it. In this paper, we present a novel algorithm for
verifying robustness properties of neural networks. Our method synergistically
combines gradient-based optimization methods for counterexample search with
abstraction-based proof search to obtain a sound and (δ-)complete
decision procedure. Our method also employs a data-driven approach to learn a
verification policy that guides abstract interpretation during proof search. We
have implemented the proposed approach in a tool called Charon and
experimentally evaluated it on hundreds of benchmarks. Our experiments show
that the proposed approach significantly outperforms three state-of-the-art
tools, namely AI^2, Reluplex, and ReluVal.
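
The core loop described above, alternating gradient-based counterexample search with abstraction-based proof search and refining the input region when both are inconclusive, can be illustrated on a toy network. The following is a minimal sketch only: the 2-4-2 ReLU network, the interval (box) abstraction, the finite-difference margin descent, and the widest-dimension splitting heuristic are illustrative stand-ins, not Charon's actual components, and the learned verification policy is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)     # toy 2-4-2 ReLU classifier
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def interval_output(lo, hi):
    """Sound box abstraction of the network output (proof side)."""
    def affine(W, b, lo, hi):
        c, r = (lo + hi) / 2, (hi - lo) / 2
        return W @ c + b - np.abs(W) @ r, W @ c + b + np.abs(W) @ r
    l, u = affine(W1, b1, lo, hi)
    return affine(W2, b2, np.maximum(l, 0), np.maximum(u, 0))

def search_counterexample(lo, hi, label, steps=50, lr=0.05):
    """Gradient-style descent on the classification margin (falsification side)."""
    x, other = (lo + hi) / 2, 1 - label
    margin = lambda z: forward(z)[label] - forward(z)[other]
    for _ in range(steps):
        if margin(x) < 0:
            return x                               # concrete misclassification
        g = np.array([(margin(x + e) - margin(x - e)) / 2e-4
                      for e in 1e-4 * np.eye(len(x))])
        x = np.clip(x - lr * g, lo, hi)
    return None

def verify(x0, eps, depth=8):
    """Alternate proof and counterexample search, splitting undecided boxes."""
    label = int(np.argmax(forward(x0)))
    other, boxes = 1 - label, [(x0 - eps, x0 + eps)]
    for _ in range(depth):
        pending = []
        for lo, hi in boxes:
            l, u = interval_output(lo, hi)
            if u[other] < l[label]:
                continue                           # robustness proved on this box
            cex = search_counterexample(lo, hi, label)
            if cex is not None:
                return "falsified", cex
            d = int(np.argmax(hi - lo))            # refine along the widest dimension
            lo2, hi2 = lo.copy(), hi.copy()
            lo2[d] = hi2[d] = (lo[d] + hi[d]) / 2
            pending += [(lo, hi2), (lo2, hi)]
        if not pending:
            return "verified", None
        boxes = pending
    return "unknown", None                         # undecided within budget

print(verify(np.array([0.2, -0.1]), eps=0.05))
```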
Invariant Synthesis for Incomplete Verification Engines
We propose a framework for synthesizing inductive invariants for incomplete
verification engines, which soundly reduce logical problems in undecidable
theories to decidable theories. Our framework is based on the counter-example
guided inductive synthesis principle (CEGIS) and allows verification engines to
communicate non-provability information to guide invariant synthesis. We show
precisely how the verification engine can compute such non-provability
information and how to build effective learning algorithms when invariants are
expressed as Boolean combinations of a fixed set of predicates. Moreover, we
evaluate our framework in two verification settings, one in which verification
engines need to handle quantified formulas and one in which verification
engines have to reason about heap properties expressed in an expressive but
undecidable separation logic. Our experiments show that our invariant synthesis
framework based on non-provability information can both effectively synthesize
inductive invariants and adequately strengthen contracts across a large suite
of programs.
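
A minimal sketch of the CEGIS-style loop described above, with non-provability feedback modeled ICE-style as implication constraints over a fixed predicate pool. The predicate pool, the toy states, and the mock verification engine are assumptions made purely for illustration; the paper's engines reduce undecidable theories to decidable ones and return richer non-provability information from failed proof attempts.

```python
from itertools import combinations

PREDICATES = {                      # fixed predicate pool over states (x, y)
    "x>=0": lambda s: s[0] >= 0,
    "y>=0": lambda s: s[1] >= 0,
    "x<=y": lambda s: s[0] <= s[1],
    "x==y": lambda s: s[0] == s[1],
}

def holds(conj, s):
    return all(PREDICATES[p](s) for p in conj)

def learner(pos, neg, impl):
    """Smallest conjunction of pool predicates consistent with all constraints."""
    names = list(PREDICATES)
    for k in range(len(names) + 1):
        for conj in combinations(names, k):
            if (all(holds(conj, s) for s in pos)
                    and not any(holds(conj, s) for s in neg)
                    and all(holds(conj, t) for s, t in impl if holds(conj, s))):
                return conj
    return None

def mock_engine(conj):
    """Stand-in verification engine: returns a new constraint, or None if proved.
    A real engine would run the incomplete solver and extract non-provability
    information from the failed proof obligation."""
    if not holds(conj, (0, 0)):                   # initial state must be covered
        return ("pos", (0, 0))
    if holds(conj, (3, 1)):                       # bad state must be excluded
        return ("neg", (3, 1))
    if holds(conj, (1, 2)) and not holds(conj, (2, 3)):
        return ("impl", ((1, 2), (2, 3)))         # inductiveness not provable
    return None

pos, neg, impl = [], [], []
inv = ()
while (feedback := mock_engine(inv)) is not None:
    kind, data = feedback
    {"pos": pos, "neg": neg, "impl": impl}[kind].append(data)
    inv = learner(pos, neg, impl)
    assert inv is not None, "predicate pool too weak for this example"
print("invariant:", " and ".join(inv) if inv else "true")
```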
Learning a Static Analyzer from Data
To be practically useful, modern static analyzers must precisely model the
effect of both the statements in the programming language and the frameworks
used by the program under analysis. While important, manually addressing these
challenges is difficult for at least two reasons: (i) the effects on the
overall analysis can be non-trivial, and (ii) as the size and complexity of
modern libraries increase, so does the number of cases the analysis must handle.
In this paper we present a new, automated approach for creating static
analyzers: instead of manually providing the various inference rules of the
analyzer, the key idea is to learn these rules from a dataset of programs. Our
method consists of two ingredients: (i) a synthesis algorithm capable of
learning a candidate analyzer from a given dataset, and (ii) a counter-example
guided learning procedure which generates new programs beyond those in the
initial dataset, critical for discovering corner cases and ensuring the learned
analysis generalizes to unseen programs.
We implemented and instantiated our approach to the task of learning
JavaScript static analysis rules for a subset of points-to analysis and for
allocation sites analysis. These are challenging yet important problems that
have received significant research attention. We show that our approach is
effective: our system automatically discovered practical and useful inference
rules for many cases that are tricky to manually identify and are missed by
state-of-the-art, manually tuned analyzers.
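
A hedged sketch of the two ingredients named above on a deliberately tiny domain: a synthesizer that picks an analysis rule consistent with a dataset of (program, expected result) pairs, and a counterexample-guided loop that samples fresh programs to expose disagreements with a ground-truth oracle. The straight-line toy language, the three-rule grammar, and the oracle are illustrative assumptions, far simpler than the JavaScript points-to and allocation-site analyses the paper targets.

```python
import random

random.seed(0)
VARS = ["a", "b", "c"]

def random_program(n=4):
    """A straight-line program: each statement is 'v = new' or 'v = w'."""
    return [(random.choice(VARS), random.choice(["new"] + VARS)) for _ in range(n)]

def oracle(prog):
    """Ground truth: concrete points-to sets, statement i being allocation site i."""
    pts = {v: frozenset() for v in VARS}
    for i, (v, rhs) in enumerate(prog):
        pts[v] = frozenset([i]) if rhs == "new" else pts[rhs]
    return pts

# Candidate transfer rules for 'v = w' form a tiny grammar: copy w's set,
# join it into v's old set, or ignore the assignment.
RULES = ["copy", "join", "ignore"]

def analyze(prog, rule):
    pts = {v: frozenset() for v in VARS}
    for i, (v, rhs) in enumerate(prog):
        if rhs == "new":
            pts[v] = frozenset([i])
        elif rule == "copy":
            pts[v] = pts[rhs]
        elif rule == "join":
            pts[v] = pts[v] | pts[rhs]
        # rule == "ignore": leave pts[v] unchanged
    return pts

def synthesize(dataset):
    """Pick any rule consistent with every (program, expected output) pair."""
    for rule in RULES:
        if all(analyze(p, rule) == out for p, out in dataset):
            return rule
    return None

def find_counterexample(rule, tries=1000):
    """Sample fresh programs and look for a disagreement with the oracle."""
    for _ in range(tries):
        p = random_program()
        if analyze(p, rule) != oracle(p):
            return p
    return None

dataset, rule = [], "ignore"                      # start from a wrong candidate
while (cex := find_counterexample(rule)) is not None:
    dataset.append((cex, oracle(cex)))            # grow the dataset and re-learn
    rule = synthesize(dataset)
print("learned transfer rule for 'v = w':", rule)
```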
Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks (Extended Version)
Verifying real-world programs often requires inferring loop invariants with
nonlinear constraints. This is especially true in programs that perform many
numerical operations, such as control systems for avionics or industrial
plants. Recently, data-driven methods for loop invariant inference have shown
promise, especially on linear invariants. However, applying data-driven
inference to nonlinear loop invariants is challenging due to the large number
and magnitude of high-order terms, the potential for overfitting on a small
number of samples, and the large space of possible inequality bounds.
In this paper, we introduce a new neural architecture for general SMT
learning, the Gated Continuous Logic Network (G-CLN), and apply it to nonlinear
loop invariant learning. G-CLNs extend the Continuous Logic Network (CLN)
architecture with gating units and dropout, which allow the model to robustly
learn general invariants over large numbers of terms. To address overfitting
that arises from finite program sampling, we introduce fractional sampling---a
sound relaxation of loop semantics to continuous functions that facilitates
unbounded sampling on the real domain. We additionally design a new CLN activation
function, the Piecewise Biased Quadratic Unit (PBQU), for naturally learning
tight inequality bounds.
We incorporate these methods into a nonlinear loop invariant inference system
that can learn general nonlinear loop invariants. We evaluate our system on a
benchmark of nonlinear loop invariants and show it solves 26 out of 27
problems, 3 more than prior work, with an average runtime of 53.3 seconds. We
further demonstrate the generic learning ability of G-CLNs by solving all 124
problems in the linear Code2Inv benchmark. We also perform a quantitative
stability evaluation, showing that G-CLNs achieve a substantially higher
convergence rate on quadratic problems than CLN models.
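
A rough sketch of the continuous-logic ingredients mentioned above: an activation whose truth value peaks when an inequality bound is tight, and a gated product t-norm that can switch conjuncts out of a clause. The exact PBQU formula and gating scheme below are assumptions chosen only to exhibit these qualitative properties; they are not copied from the paper's architecture.

```python
import numpy as np

def pbqu(t, k=4.0):
    """Truth value of the constraint t <= 0: peaks at t = 0 (a tight bound) and
    decays quadratically as t moves away in either direction (assumed shape)."""
    return np.where(t <= 0, 1.0 / (1.0 + k * t * t), np.maximum(1.0 - k * t * t, 0.0))

def gated_and(truths, gates):
    """Gated product t-norm: a gate near 0 drops its conjunct from the clause."""
    return np.prod(gates * truths + (1.0 - gates), axis=-1)

# Loop samples (x, y) from a toy trace on which x - y <= 0 actually holds.
samples = np.array([[0, 0], [1, 2], [2, 3], [3, 5]], dtype=float)

# Tightness: among sound bounds b for "x - y <= b", the tight one (b = 0)
# maximizes the truth value conjoined over all samples; an unsound bound drops to 0.
for b in [-1.0, 0.0, 1.0, 2.0]:
    truth = np.prod(pbqu(samples[:, 0] - samples[:, 1] - b))
    print(f"bound b = {b:+.1f}  clause truth = {truth:.5f}")

# Gating: conjoining the spurious constraint "x <= 0" kills the clause unless
# its gate is switched off (which dropout and regularization would encourage).
truths = pbqu(np.stack([samples[:, 0] - samples[:, 1], samples[:, 0]], axis=-1))
for gates in (np.array([1.0, 1.0]), np.array([1.0, 0.0])):
    print("gates", gates, "-> clause truth", float(np.prod(gated_and(truths, gates))))
```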
Bridging boolean and quantitative synthesis using smoothed proof search
We present a new technique for parameter synthesis under boolean and quantitative objectives. The input to the technique is a "sketch" --- a program with missing numerical parameters --- and a probabilistic assumption about the program's inputs. The goal is to automatically synthesize values for the parameters such that the resulting program satisfies: (1) a boolean specification, which states that the program must meet certain assertions, and (2) a quantitative specification, which assigns a real-valued rating to every program and which the synthesizer is expected to optimize.
Our method --- called smoothed proof search --- reduces this task to a sequence of unconstrained smooth optimization problems that are then solved numerically. By iteratively solving these problems, we obtain parameter values that get closer and closer to meeting the boolean specification; at the limit, we obtain values that provably meet the specification. The approximations are computed using a new notion of smoothing for program abstractions, where an abstract transformer is approximated by a function that is continuous according to a metric over abstract states.
We present a prototype implementation of our synthesis procedure, and experimental results on two benchmarks from the embedded control domain. The experiments demonstrate the benefits of smoothed proof search over an approach that does not meet the boolean and quantitative synthesis goals simultaneously.
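
A minimal sketch of the smoothed-proof-search recipe on a one-parameter program sketch out = c * x, with a probabilistic input assumption x ~ U[0, 1], a boolean spec "out <= 0.5 for every input", and a quantitative spec that rewards large expected output. The interval abstraction, the softplus smoothing, and the annealing schedule below are illustrative assumptions rather than the paper's abstract transformers or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 1000)            # samples of the probabilistic input

def abstract_upper(c):
    """Interval abstraction of c * x over x in [0, 1]: max of the endpoints."""
    return max(c * 0.0, c * 1.0)

def smoothed_objective(c, tau, weight=10.0):
    """Quantitative cost plus a smooth surrogate of the abstract assertion check."""
    quantitative = -np.mean(c * xs)                              # maximize E[c * x]
    violation = (abstract_upper(c) - 0.5) / tau
    return quantitative + weight * np.logaddexp(0.0, violation)  # softplus penalty

def minimize_1d(f, c0, lr=1e-3, steps=2000, h=1e-6):
    """Plain gradient descent with a finite-difference gradient."""
    c = c0
    for _ in range(steps):
        c -= lr * (f(c + h) - f(c - h)) / (2 * h)
    return c

c = 0.0
for tau in [0.1, 0.03, 0.01, 0.003, 0.001]:      # anneal the smoothing parameter
    c = minimize_1d(lambda v: smoothed_objective(v, tau), c)
    print(f"tau = {tau:<6} c = {c:.4f}")

# Exact (non-smoothed) abstract check of the boolean spec for the final value.
print("boolean spec provably met:", abstract_upper(c) <= 0.5, "with c =", round(float(c), 4))
```

As tau shrinks, the minimizers of the smooth problems approach the largest parameter that still passes the abstract assertion check, mirroring the abstract's claim that the iterates get closer and closer to meeting the boolean specification.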