Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems
Neural networks (NNs) are now routinely implemented on systems that must
operate in uncertain environments, but the tools for formally analyzing how
this uncertainty propagates to NN outputs are not yet commonplace. Computing
tight bounds on NN output sets (given an input set) provides a measure of
confidence associated with the NN decisions and is essential to deploy NNs on
safety-critical systems. Recent works approximate the propagation of sets
through nonlinear activations or partition the uncertainty set to provide a
guaranteed outer bound on the set of possible NN outputs. However, the resulting
bounds are often loose enough to cause excessive conservatism, and/or the
computation is too slow for online analysis. This paper unifies propagation and
partition approaches to
provide a family of robustness analysis algorithms that give tighter bounds
than existing works for the same amount of computation time (or reduced
computational effort for a desired accuracy level). Moreover, we provide new
partitioning techniques that are aware of their current bound estimates and
desired boundary shape (e.g., lower bounds, weighted ℓ∞-ball, convex
hull), leading to further improvements in the computation-tightness tradeoff.
The paper demonstrates the tighter bounds and reduced conservatism of the
proposed robustness analysis framework with examples from model-free RL and
forward kinematics learning.
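
To illustrate the propagation-plus-partition idea described in this abstract (a minimal sketch under assumed conventions, not the paper's actual algorithm), the code below bounds the outputs of a small fully connected ReLU network with interval bound propagation, then tightens the bound by splitting the input box into a uniform grid and taking the union of the per-cell bounds. The (weight, bias) layer representation and the helper names interval_bounds and partitioned_bounds are assumptions made for the example.

    import numpy as np

    def interval_bounds(layers, lo, hi):
        """Propagate an axis-aligned box [lo, hi] through a fully connected ReLU network."""
        for i, (W, b) in enumerate(layers):
            center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
            mid = W @ center + b                     # affine image of the box center
            rad = np.abs(W) @ radius                 # worst-case spread of the box
            lo, hi = mid - rad, mid + rad
            if i < len(layers) - 1:                  # ReLU on hidden layers only
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return lo, hi

    def partitioned_bounds(layers, lo, hi, splits=4):
        """Split the input box into a uniform grid and union the per-cell output bounds."""
        edges = [np.linspace(l, h, splits + 1) for l, h in zip(lo, hi)]
        out_lo = out_hi = None
        for idx in np.ndindex(*([splits] * len(lo))):
            cell_lo = np.array([edges[d][k] for d, k in enumerate(idx)])
            cell_hi = np.array([edges[d][k + 1] for d, k in enumerate(idx)])
            clo, chi = interval_bounds(layers, cell_lo, cell_hi)
            out_lo = clo if out_lo is None else np.minimum(out_lo, clo)
            out_hi = chi if out_hi is None else np.maximum(out_hi, chi)
        return out_lo, out_hi

    # Toy usage: a random 2-8-2 ReLU network; the partitioned bound is never looser.
    rng = np.random.default_rng(0)
    layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
              (rng.standard_normal((2, 8)), rng.standard_normal(2))]
    lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    print(interval_bounds(layers, lo, hi))       # single-shot bound (looser)
    print(partitioned_bounds(layers, lo, hi))    # partitioned bound (tighter)

The sketch uses uniform splitting for simplicity; the paper's contribution is precisely to replace such fixed partitioning with schemes that adapt to the current bound estimate and the desired boundary shape.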
Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty
When using deep neural networks to operate safety-critical systems, assessing
the sensitivity of the network outputs when subject to uncertain inputs is of
paramount importance. Such assessment is commonly done using reachability
analysis or robustness certification. However, certification techniques
typically ignore localization information, while reachable set methods can fail
to issue robustness guarantees. Furthermore, many advanced methods are either
computationally intractable in practice or restricted to very specific models.
In this paper, we develop a data-driven optimization-based method capable of
simultaneously certifying the safety of network outputs and localizing them.
The proposed method provides a unified assessment framework, as it subsumes
state-of-the-art reachability analysis and robustness certification. The method
applies to deep neural networks of all sizes and structures, and to random
input uncertainty with a general distribution. We develop sufficient conditions
for the convexity of the underlying optimization, and for the number of data
samples to certify and localize the outputs with overwhelming probability. We
experimentally demonstrate the efficacy and tractability of the method on a
deep ReLU network.
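
As a rough illustration of the sampling-based flavor of data-driven assessment (a simplified stand-in, not the paper's optimization-based method), the sketch below draws inputs from a given distribution, pushes them through a ReLU network, and reports the smallest axis-aligned box containing all sampled outputs. The network representation, sample count, and Gaussian input model are assumptions for the example, and the sketch omits the sample-size conditions that would turn the empirical box into a probabilistic certificate.

    import numpy as np

    def relu_net(layers, x):
        """Evaluate a fully connected ReLU network given as (weight, bias) pairs."""
        for i, (W, b) in enumerate(layers):
            x = W @ x + b
            if i < len(layers) - 1:                  # ReLU on hidden layers only
                x = np.maximum(x, 0.0)
        return x

    def empirical_output_box(layers, sample_input, n_samples=10_000):
        """Smallest axis-aligned box containing the outputs of n_samples random inputs."""
        outputs = np.stack([relu_net(layers, sample_input()) for _ in range(n_samples)])
        return outputs.min(axis=0), outputs.max(axis=0)

    # Toy usage: Gaussian input uncertainty around a nominal operating point.
    rng = np.random.default_rng(1)
    layers = [(rng.standard_normal((16, 3)), rng.standard_normal(16)),
              (rng.standard_normal((2, 16)), rng.standard_normal(2))]
    nominal = np.array([0.5, -0.2, 1.0])
    lo, hi = empirical_output_box(layers, lambda: nominal + 0.1 * rng.standard_normal(3))
    print("empirical output box:", lo, hi)

The returned box localizes the outputs seen in the samples; relating the number of samples to a confidence level for unseen inputs is exactly the kind of guarantee the paper's sufficient conditions provide.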
Counterexample-Guided Learning of Monotonic Neural Networks
The widespread adoption of deep learning is often attributed to its automatic
feature construction with minimal inductive bias. However, in many real-world
tasks, the learned function is intended to satisfy domain-specific constraints.
We focus on monotonicity constraints, which are common and require that the
function's output increases with increasing values of specific input features.
We develop a counterexample-guided technique to provably enforce monotonicity
constraints at prediction time. Additionally, we propose a technique to use
monotonicity as an inductive bias for deep learning. It works by iteratively
incorporating monotonicity counterexamples in the learning process. Contrary to
prior work in monotonic learning, we target general ReLU neural networks and do
not further restrict the hypothesis space. We have implemented these techniques
in a tool called COMET. Experiments on real-world datasets demonstrate that our
approach achieves state-of-the-art results compared to existing monotonic
learners, and can improve the model quality compared to those that were trained
without taking monotonicity constraints into account.
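
A minimal sketch of the counterexample-guided loop described above, not COMET itself: train a model, search for a pair of inputs that violates monotonicity in a chosen feature, add the pair back to the training set with labels ordered to respect the constraint, and retrain. The random-search counterexample finder and the scikit-learn learner are stand-ins for the verification-based machinery used in the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def find_counterexample(model, rng, mono_dim=0, trials=2000, box=(-1.0, 1.0)):
        """Random search for x, x2 equal except x2[mono_dim] > x[mono_dim] yet f(x2) < f(x)."""
        for _ in range(trials):
            x = rng.uniform(*box, size=2)
            x2 = x.copy()
            x2[mono_dim] = rng.uniform(x[mono_dim], box[1])
            if model.predict([x2])[0] < model.predict([x])[0] - 1e-6:
                return x, x2
        return None

    rng = np.random.default_rng(2)
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = X[:, 0] ** 3 + 0.1 * rng.standard_normal(200)      # monotone in feature 0

    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    for _ in range(5):                                      # counterexample-guided rounds
        model.fit(X, y)
        ce = find_counterexample(model, rng)
        if ce is None:                                      # no violation found: stop
            break
        x, x2 = ce
        y_lo, y_hi = sorted(model.predict([x, x2]))         # order labels to respect monotonicity
        X = np.vstack([X, x, x2])                           # augment the data and retrain
        y = np.append(y, [y_lo, y_hi])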