A Provable Defense for Deep Residual Networks
We present a training system that can provably defend significantly larger
neural networks than previously possible, including ResNet-34 and DenseNet-100.
Our approach is based on differentiable abstract interpretation and introduces
two novel concepts: (i) abstract layers for fine-tuning the precision and
scalability of the abstraction, and (ii) a flexible domain-specific language
(DSL) for describing training objectives that combine abstract and concrete
losses with arbitrary specifications. Our training method is implemented in the
DiffAI system.
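The core idea behind differentiable abstract interpretation is to propagate a sound over-approximation of the input set through the network instead of a single point, so the certified loss can be trained end-to-end. As a minimal illustration (not DiffAI's actual code), the sketch below propagates an interval (box) abstraction through one affine layer and a ReLU; the weight values are arbitrary and purely illustrative:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Positive weights map the lower bound to the lower bound;
    negative weights swap the roles of lo and hi.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies to each bound independently."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Illustrative layer: two inputs, two outputs.
W = np.array([[1.0, -1.0], [2.0, 1.0]])
b = np.array([0.0, 0.5])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

a_lo, a_hi = interval_affine(lo, hi, W, b)
r_lo, r_hi = interval_relu(a_lo, a_hi)
```

Every operation above is differentiable in `W` and `b`, which is what allows an abstract (worst-case) loss computed on `r_lo`/`r_hi` to be minimized by ordinary gradient descent. DiffAI's abstract layers and DSL generalize this idea to richer domains and loss combinations.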
Robustness Certification for Point Cloud Models
The use of deep 3D point cloud models in safety-critical applications, such
as autonomous driving, dictates the need to certify the robustness of these
models to real-world transformations. This is technically challenging, as it
requires a scalable verifier tailored to point cloud models that handles a wide
range of semantic 3D transformations. In this work, we address this challenge
and introduce 3DCertify, the first verifier able to certify the robustness of
point cloud models. 3DCertify is based on two key insights: (i) a generic
relaxation based on first-order Taylor approximations, applicable to any
differentiable transformation, and (ii) a precise relaxation for global feature
pooling, which is more complex than pointwise activations (e.g., ReLU or
sigmoid) but commonly employed in point cloud models. We demonstrate the
effectiveness of 3DCertify by performing an extensive evaluation on a wide
range of 3D transformations (e.g., rotation, twisting) for both classification
and part segmentation tasks. For example, we can certify robustness against
rotations by 60° for 95.7% of point clouds, and our max pool
relaxation increases certification by up to 15.6%.
Comment: International Conference on Computer Vision (ICCV) 202
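A first-order Taylor relaxation bounds a differentiable transformation over a parameter interval by its linearization around the interval midpoint plus a Lagrange-remainder term. The sketch below applies this generic recipe (in the spirit of the abstract, not 3DCertify's actual implementation) to one coordinate of a 2D rotation, x' = x·cos(t) − y·sin(t), over an angle interval:

```python
import numpy as np

def rotation_x_taylor_bounds(x, y, t_lo, t_hi):
    """Sound bounds on f(t) = x*cos(t) - y*sin(t) for t in [t_lo, t_hi].

    Uses f(t) = f(t0) + f'(t0)*(t - t0) + R with |R| <= max|f''| * r^2 / 2,
    where t0 is the midpoint, r the half-width, and |f''| <= |x| + |y|
    (since f''(t) = -x*cos(t) + y*sin(t)).
    """
    t0 = 0.5 * (t_lo + t_hi)
    r = 0.5 * (t_hi - t_lo)                    # half-width of angle interval
    f0 = x * np.cos(t0) - y * np.sin(t0)       # value at midpoint
    f1 = -x * np.sin(t0) - y * np.cos(t0)      # first derivative at midpoint
    rem = 0.5 * (abs(x) + abs(y)) * r * r      # Lagrange remainder bound
    return f0 - abs(f1) * r - rem, f0 + abs(f1) * r + rem

# Example: bound the rotated x-coordinate of point (1.0, 0.5)
# for rotations between 0° and 60° (angles in radians).
lo, hi = rotation_x_taylor_bounds(1.0, 0.5, 0.0, np.deg2rad(60.0))
```

Because the recipe only needs the transformation's first derivative and a bound on its second derivative, it applies to any differentiable transformation; the remainder term shrinks quadratically as the parameter interval is split, which is what makes the relaxation precise enough to certify ranges like the 60° rotations above.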