Towards Fast Computation of Certified Robustness for ReLU Networks
Verifying the robustness property of a general Rectified Linear Unit (ReLU)
network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer
CAV17]. Although finding the exact minimum adversarial distortion is hard,
giving a certified lower bound on the minimum distortion is possible. Currently
available methods for computing such a bound are either time-consuming or
deliver low-quality bounds that are too loose to be useful. In this paper,
we exploit the special structure of ReLU networks and provide two
computationally efficient algorithms, Fast-Lin and Fast-Lip, that are able to
certify non-trivial lower bounds on minimum distortions, by bounding the ReLU
units with appropriate linear functions (Fast-Lin), or by bounding the local
Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed methods
deliver bounds close to (within a 2-3X gap of) the exact minimum distortion found by
Reluplex in small MNIST networks while our algorithms are more than 10,000
times faster; (2) our methods deliver similar quality of bounds (the gap is
within 35% and usually around 10%; sometimes our bounds are even better) for
larger networks compared to the methods based on solving linear programming
problems but our algorithms are 33-14,000 times faster; (3) our method is
capable of solving large MNIST and CIFAR networks up to 7 layers with more than
10,000 neurons within tens of seconds on a single CPU core.
In addition, we show that, in fact, there is no polynomial time algorithm
that can approximately find the minimum adversarial distortion of a
ReLU network with a 0.99 ln n approximation ratio unless NP = P, where n is the
number of neurons in the network. Comment: Tsui-Wei Weng and Huan Zhang contributed equally.
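The linear relaxation idea behind Fast-Lin can be illustrated with a short sketch. The code below is a minimal, self-contained illustration (not the authors' implementation): given pre-activation bounds l ≤ z ≤ u for each ReLU unit, it returns linear upper and lower bounds on ReLU(z), using the chord from (l, 0) to (u, u) as the upper bound for unstable units and, in Fast-Lin fashion, the same slope (through the origin) as the lower bound.

```python
import numpy as np

def relu_linear_bounds(l, u):
    """Given elementwise pre-activation bounds l <= z <= u, return
    (up_slope, up_bias, low_slope) such that
        low_slope * z  <=  ReLU(z)  <=  up_slope * z + up_bias
    holds for all z in [l, u]. Illustrative Fast-Lin-style relaxation;
    function name and interface are ours, not from the paper."""
    l, u = np.asarray(l, dtype=float), np.asarray(u, dtype=float)
    up_slope = np.zeros_like(l)
    up_bias = np.zeros_like(l)
    low_slope = np.zeros_like(l)

    active = l >= 0            # ReLU acts as the identity on this unit
    up_slope[active] = 1.0
    low_slope[active] = 1.0
    # inactive units (u <= 0): ReLU is identically 0, so all zeros are correct

    unstable = (l < 0) & (u > 0)
    s = u[unstable] / (u[unstable] - l[unstable])  # slope of the chord
    up_slope[unstable] = s
    up_bias[unstable] = -s * l[unstable]           # chord passes through (l, 0)
    low_slope[unstable] = s  # Fast-Lin uses the same slope for the lower bound
    return up_slope, up_bias, low_slope
```

Propagating these elementwise linear bounds layer by layer is what lets the certified bound be computed with a few matrix products instead of solving an LP per neuron.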
Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
As machine learning models are increasingly being employed to make
consequential decisions in real-world settings, it becomes critical to ensure
that individuals who are adversely impacted (e.g., loan denied) by the
predictions of these models are provided with a means for recourse. While
several approaches have been proposed to construct recourses for affected
individuals, the recourses output by these methods achieve either low costs
(i.e., ease of implementation) or robustness to small perturbations (i.e.,
noisy implementations of recourses), but not both, due to the inherent
trade-offs between the recourse costs and robustness. Furthermore, prior
approaches do not provide end users with any agency over navigating the
aforementioned trade-offs. In this work, we address the above challenges by
proposing the first algorithmic framework which enables users to effectively
manage the recourse cost vs. robustness trade-offs. More specifically, our
framework, Probabilistically ROBust rEcourse (PROBE), lets users choose
the probability with which a recourse could get invalidated (the recourse
invalidation rate) if small changes are made to the recourse, i.e., if the recourse
is implemented somewhat noisily. To this end, we propose a novel objective
function which simultaneously minimizes the gap between the achieved
(resulting) and desired recourse invalidation rates, minimizes recourse costs,
and also ensures that the resulting recourse achieves a positive model
prediction. We develop novel theoretical results to characterize the recourse
invalidation rates corresponding to any given instance w.r.t. different classes
of underlying models (e.g., linear models, tree-based models, etc.), and
leverage these results to efficiently optimize the proposed objective.
Experimental evaluation with multiple real-world datasets demonstrates the
efficacy of the proposed framework.
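For the linear-model case mentioned above, the recourse invalidation rate under Gaussian implementation noise admits a closed form, which the sketch below illustrates. This is our own minimal example, with assumed names and interface: it takes a linear classifier sign(w·x + b), a recourse point x with a positive prediction, and noise scale sigma, and returns the probability that the noisy implementation x + eps, eps ~ N(0, sigma² I), flips the prediction.

```python
import math

def invalidation_rate_linear(w, b, x, sigma):
    """Probability that a recourse x, positively classified by the linear
    model sign(w.x + b), becomes invalid when implemented as x + eps with
    eps ~ N(0, sigma^2 I). Illustrative sketch; not the paper's API."""
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    # w.(x + eps) + b is Gaussian with mean `margin` and std sigma * ||w||,
    # so the invalidation probability is P(value < 0) = Phi(-margin / (sigma * ||w||))
    z = margin / (sigma * norm_w)
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # Phi(-z)
```

A user-specified target invalidation rate can then be enforced by penalizing the gap between this quantity and the desired rate inside a cost-minimizing recourse objective, which is the trade-off the framework exposes.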