ThumbNet: One Thumbnail Image Contains All You Need for Recognition
Although deep convolutional neural networks (CNNs) have achieved great
success in computer vision tasks, their real-world application is still impeded
by their voracious demand for computational resources. Current works mostly seek
to compress the network by reducing its parameters or parameter-incurred
computation, neglecting the influence of the input image on system
complexity. Based on the fact that input images of a CNN contain substantial
redundancy, in this paper, we propose a unified framework, dubbed as ThumbNet,
to simultaneously accelerate and compress CNN models by enabling them to infer
on one thumbnail image. We provide three effective strategies to train
ThumbNet. In doing so, ThumbNet learns an inference network that performs
equally well on small images as the original-input network on large images.
With ThumbNet, we obtain not only a thumbnail-input inference network that
drastically reduces computation and memory requirements, but also an image
downscaler that generates thumbnail images for generic classification tasks.
Extensive experiments show the effectiveness of ThumbNet, and demonstrate that
the thumbnail-input inference network learned by ThumbNet can adequately retain
the accuracy of the original-input network even when the input images are
downscaled 16 times.
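The abstract does not specify how ThumbNet's learned downscaler works, but the core idea of inferring on a thumbnail can be illustrated with a plain average-pooling downscaler as a stand-in. The `downscale` function below is a hypothetical sketch, not the paper's method; note that convolutional FLOPs scale with spatial area, so shrinking each side by a factor `f` cuts conv compute by roughly `f**2`.

```python
import numpy as np

def downscale(img, factor):
    """Average-pool an HxW image by an integer factor.

    A simple stand-in for a learned thumbnail generator: each
    factor x factor block of pixels is replaced by its mean.
    """
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0, "size must divide evenly"
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
thumb = downscale(img, 2)             # 2x2 thumbnail (4x fewer pixels)
```

Because conv-layer cost is proportional to the number of spatial positions, a thumbnail with 16x fewer pixels (4x per side) gives roughly a 16x reduction in conv FLOPs, matching the compute savings the abstract describes.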
Demystifying the Adversarial Robustness of Random Transformation Defenses
Neural networks' lack of robustness against attacks raises concerns in
security-sensitive settings such as autonomous vehicles. While many
countermeasures may look promising, only a few withstand rigorous evaluation.
Defenses using random transformations (RT) have shown impressive results,
particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of
defense has not been rigorously evaluated, leaving its robustness properties
poorly understood. Their stochastic properties make evaluation more challenging
and render many proposed attacks on deterministic models inapplicable. First,
we show that the BPDA attack (Athalye et al., 2018a) used in BaRT's evaluation
is ineffective and likely overestimates its robustness. We then attempt to
construct the strongest possible RT defense through the informed selection of
transformations and Bayesian optimization for tuning their parameters.
Furthermore, we create the strongest possible attack to evaluate our RT
defense. Our new attack vastly outperforms the baseline, reducing the accuracy
by 83% compared to the 19% reduction by the commonly used EoT attack.
Our result indicates that the RT defense on the
Imagenette dataset (a ten-class subset of ImageNet) is not robust against
adversarial examples. Extending the study further, we use our new attack to
adversarially train RT defense (called AdvRT), resulting in a large robustness
gain. Code is available at
https://github.com/wagner-group/demystify-random-transform.
Comment: ICML 2022 (short presentation), AAAI 2022 AdvML Workshop (best paper, oral presentation)
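The EoT (Expectation over Transformation) attack mentioned in the abstract averages the input gradient over samples of the defense's random transformation, then perturbs in the direction of that expected gradient. The sketch below is a minimal illustration on a toy logistic model with an analytic gradient and Gaussian noise standing in for the randomized defense; the model, transformation, and step size are all assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # weights of a toy logistic "model": score z = w @ x

def loss_grad(x, y):
    """Gradient w.r.t. x of the logistic loss log(1 + exp(-y*z)), y in {-1, +1}."""
    z = w @ x
    return -y * (1.0 / (1.0 + np.exp(y * z))) * w

def random_transform(x):
    """Stand-in for a randomized defense: additive Gaussian noise."""
    return x + rng.normal(scale=0.1, size=x.shape)

def eot_fgsm(x, y, eps=0.5, n_samples=64):
    """EoT attack: average input gradients over sampled transformations,
    then take one FGSM-style step along the sign of the expected gradient."""
    g = np.mean([loss_grad(random_transform(x), y)
                 for _ in range(n_samples)], axis=0)
    return x + eps * np.sign(g)

x = rng.normal(size=8)
x_adv = eot_fgsm(x, y=1)  # perturbation bounded by eps in each coordinate
```

Averaging over transformations is what makes the gradient estimate meaningful against a stochastic defense; a single-sample gradient would be dominated by the randomness the defense injects, which is why attacks built for deterministic models often fail here.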