Train and test tightness of LP relaxations in structured prediction
This is the author accepted manuscript. The final version is available from Microtome Publishing via http://www.jmlr.org/proceedings/papers/v48/meshi16.html

Structured prediction is used in areas such as computer vision and natural language processing to predict structured outputs such as segmentations or parse trees. In these settings, prediction is performed by MAP inference or, equivalently, by solving an integer linear program. Because of the complex scoring functions required to obtain accurate predictions, both learning and inference typically require the use of approximate solvers. We propose a theoretical explanation for the striking observation that approximations based on linear programming (LP) relaxations are often tight on real-world instances. In particular, we show that learning with LP-relaxed inference encourages integrality of training instances, and that tightness generalizes from train to test data.
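To make the "LP relaxation of MAP inference" setting concrete, here is a minimal sketch (not from the paper) of the local-polytope LP relaxation for a toy pairwise model: two binary variables with unary scores and an attractive pairwise score rewarding agreement. The variable layout, scores, and use of `scipy.optimize.linprog` are illustrative assumptions; with attractive potentials the relaxation is tight, so the LP optimum lands on an integral vertex, matching the phenomenon the abstract studies.

```python
import numpy as np
from scipy.optimize import linprog

# Marginal variables for a 2-node binary pairwise model:
# [m1_0, m1_1, m2_0, m2_1, p00, p01, p10, p11]
# Unary scores for each node/state, plus a reward of 0.5 when the
# two variables agree (attractive pairwise potential).
theta = np.array([0.2, 0.8, 0.6, 0.4, 0.5, 0.0, 0.0, 0.5])

# Local-polytope constraints: node marginals sum to 1, and the
# pairwise marginals are consistent with the node marginals.
A_eq = np.array([
    [ 1,  1,  0,  0, 0, 0, 0, 0],   # m1_0 + m1_1 = 1
    [ 0,  0,  1,  1, 0, 0, 0, 0],   # m2_0 + m2_1 = 1
    [-1,  0,  0,  0, 1, 1, 0, 0],   # p00 + p01 = m1_0
    [ 0, -1,  0,  0, 0, 0, 1, 1],   # p10 + p11 = m1_1
    [ 0,  0, -1,  0, 1, 0, 1, 0],   # p00 + p10 = m2_0
    [ 0,  0,  0, -1, 0, 1, 0, 1],   # p01 + p11 = m2_1
])
b_eq = np.zeros(6)
b_eq[:2] = 1.0

# linprog minimizes, so negate the scores to maximize them.
res = linprog(-theta, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")

# The LP is tight here: the optimum is the integral MAP assignment
# (x1=1, x2=1) with score 0.8 + 0.4 + 0.5 = 1.7.
print(-res.fun, res.x)
```

Replacing the agreement reward with a disagreement (repulsive) score in a frustrated cycle is a standard way to make the same LP return a fractional, non-integral vertex.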
SoK: Certified Robustness for Deep Neural Networks
Great advances in deep neural networks (DNNs) have led to state-of-the-art
performance on a wide range of tasks. However, recent studies have shown that
DNNs are vulnerable to adversarial attacks, which have brought great concerns
when deploying these models to safety-critical applications such as autonomous
driving. Different defense approaches have been proposed against adversarial
attacks, including: a) empirical defenses, which can usually be adaptively
attacked again without providing robustness certification; and b) certifiably
robust approaches, which consist of robustness verification providing the lower
bound of robust accuracy against any attacks under certain conditions and
corresponding robust training approaches. In this paper, we systematize
certifiably robust approaches and related practical and theoretical
implications and findings. We also provide the first comprehensive benchmark on
existing robustness verification and training approaches on different datasets.
In particular, we 1) provide a taxonomy for the robustness verification and
training approaches, as well as summarize the methodologies for representative
algorithms, 2) reveal the characteristics, strengths, limitations, and
fundamental connections among these approaches, 3) discuss current research
progress, theoretical barriers, main challenges, and future directions for
certifiably robust approaches for DNNs, and 4) provide an open-sourced unified
platform to evaluate 20+ representative certifiably robust approaches.

Comment: To appear at 2023 IEEE Symposium on Security and Privacy (SP); 14 pages for the main text; benchmark & tool website: http://sokcertifiedrobustness.github.io
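To illustrate what "robustness verification providing a lower bound of robust accuracy" means in practice, here is a minimal sketch (my own, not from the paper or its benchmark) of interval bound propagation (IBP), one of the simplest complete-by-soundness verification ideas the taxonomy covers: propagate an L-infinity input box through affine and ReLU layers, then certify a label if the true logit's lower bound beats every other logit's upper bound. The network weights and the helper names are illustrative assumptions.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Sound bounds for W @ x + b when lo <= x <= hi (elementwise)."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def ibp_certify(Ws, bs, x, eps, label):
    """Return (certified, (lo, hi)) for an L-inf ball of radius eps around x."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(Ws, bs)):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(Ws) - 1:                 # ReLU on all hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # Certified robust if the true logit cannot be overtaken by any other.
    others = [hi[j] for j in range(len(hi)) if j != label]
    return bool(lo[label] > max(others)), (lo, hi)

# Toy 2-4-2 network with fixed random weights (illustrative only).
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(2)]
x, eps = np.array([0.5, -0.3]), 0.1
certified, (lo, hi) = ibp_certify(Ws, bs, x, eps, label=0)
```

The soundness property — every perturbed input in the ball produces logits inside `[lo, hi]` — is what makes the resulting robust-accuracy number a true lower bound, in contrast to the adaptive-attack-vulnerable empirical defenses the abstract contrasts against.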