Defense against Universal Adversarial Perturbations
Recent advances in Deep Learning show the existence of image-agnostic,
quasi-imperceptible perturbations that, when applied to `any' image, can fool a
state-of-the-art network classifier into changing its prediction of the image
label. These `Universal Adversarial Perturbations' pose a serious threat to the
success of Deep Learning in practice. We present the first dedicated framework
to effectively defend networks against such perturbations. Our approach
learns a Perturbation Rectifying Network (PRN) as `pre-input' layers to a
targeted model, such that the targeted model needs no modification. The PRN is
learned from real and synthetic image-agnostic perturbations, where an
efficient method to compute the latter is also proposed. A perturbation
detector is separately trained on the Discrete Cosine Transform of the
input-output difference of the PRN. A query image is first passed through the
PRN and verified by the detector. If a perturbation is detected, the output of
the PRN is used for label prediction instead of the actual image. A rigorous
evaluation shows that our framework can defend network classifiers against
unseen adversarial perturbations in real-world scenarios with up to a 97.5%
success rate. The PRN also generalizes well in the sense that training for one
targeted network defends another network with a comparable success rate. Comment: Accepted in IEEE CVPR 2018
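As a quick illustration of the inference flow just described, the sketch below passes a query image through a rectifier, tests the DCT of the PRN's input-output difference with a detector, and sends the rectified image to the classifier only when a perturbation is flagged; the prn, detector and classifier callables are hypothetical placeholders for the trained components, not the authors' released models.

from scipy.fft import dctn  # 2-D type-II DCT (SciPy >= 1.4)

def defend_and_classify(image, prn, detector, classifier):
    # 'Pre-input' rectification: the targeted classifier itself is unmodified.
    rectified = prn(image)
    # The detector operates on the DCT of the PRN's input-output difference.
    residual_dct = dctn(rectified - image, norm='ortho')
    perturbed = detector(residual_dct)  # True if a perturbation is flagged
    # Label the rectified image if a perturbation was detected, else the original.
    return classifier(rectified if perturbed else image)
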
Action Classification with Locality-constrained Linear Coding
We propose an action classification algorithm which uses Locality-constrained
Linear Coding (LLC) to capture discriminative information of human body
variations in each spatiotemporal subsequence of a video sequence. Our proposed
method divides the input video into equally spaced overlapping spatiotemporal
subsequences, each of which is decomposed into blocks and then cells. We use
the Histogram of Oriented Gradient (HOG3D) feature to encode the information in
each cell. We justify the use of LLC for encoding the block descriptor by
demonstrating its superiority over Sparse Coding (SC). Our sequence descriptor
is obtained via a logistic regression classifier with L2 regularization. We
evaluate and compare our algorithm with ten state-of-the-art algorithms on five
benchmark datasets. Experimental results show that, on average, our algorithm
gives higher accuracy than these ten algorithms. Comment: ICPR 2014
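For context on the coding step, here is a minimal sketch of approximated locality-constrained coding of a single block descriptor (e.g. a HOG3D vector) against a learned codebook, in the spirit of Wang et al.'s LLC; the number of neighbours k and the regulariser are illustrative values, not the settings used in the paper.

import numpy as np

def llc_encode(x, codebook, k=5, reg=1e-4):
    # Approximated LLC: reconstruct x from its k nearest codebook atoms
    # under a sum-to-one constraint (Wang et al., CVPR 2010).
    dists = np.linalg.norm(codebook - x, axis=1)   # codebook: (n_atoms, dim)
    idx = np.argsort(dists)[:k]                    # indices of the k nearest atoms
    z = codebook[idx] - x                          # shift the local base to the origin
    C = z @ z.T                                    # small k x k covariance
    C += reg * np.trace(C) * np.eye(k)             # regularise for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                   # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w                                  # sparse code over the full codebook
    return code
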
Multispectral Palmprint Encoding and Recognition
Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins, making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate the
viability of the palmprint as a reliable and promising biometric. All source codes
are publicly available. Comment: A preliminary version of this manuscript was
published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and
Efficient Multispectral Palmprint Encoding for Human Recognition", International
Conference on Computer Vision, 2011. MATLAB code available:
https://sites.google.com/site/zohaibnet/Home/code
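As a generic illustration of why a binary hash table helps with large-gallery matching, the sketch below buckets gallery codes by a short binary prefix and scores only the candidates in the query's bucket by Hamming distance; the key length and the uint8 bit-array format are assumptions made for the example and do not reproduce the paper's Contour Code table layout.

import numpy as np
from collections import defaultdict

def build_index(gallery, key_bits=16):
    # gallery: {subject_id: 1-D uint8 array of 0/1 bits (the binary feature code)}
    index = defaultdict(list)
    for subject, code in gallery.items():
        index[code[:key_bits].tobytes()].append((subject, code))
    return index

def identify(query, index, key_bits=16):
    # Search only the bucket sharing the query's prefix, then rank by Hamming distance.
    candidates = index.get(query[:key_bits].tobytes(), [])
    if not candidates:
        return None
    return min(candidates, key=lambda sc: int(np.count_nonzero(sc[1] != query)))[0]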
