320 research outputs found
Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification
Person re-identification (re-id) aims to match pedestrians observed by
disjoint camera views. It has attracted increasing attention in computer vision
due to its importance to surveillance systems. To combat the major challenge of
cross-view visual variations, deep embedding approaches have been proposed that
learn a compact feature space from images such that Euclidean distances in that
space reflect cross-view similarity. However, the global Euclidean
distance cannot faithfully characterize the ideal similarity in a complex
visual feature space because features of pedestrian images exhibit unknown
distributions due to large variations in poses, illumination and occlusion.
Moreover, intra-personal training samples within a local range provide robust
guidance for deep embedding against uncontrolled variations, yet this local
structure cannot be captured by a global Euclidean distance. In this paper, we
study person re-id by proposing a novel sampling scheme that mines suitable
positives (i.e., intra-class samples) within a local range to improve the deep embedding in the
context of large intra-class variations. Our method is capable of learning a
deep similarity metric adaptive to local sample structure by minimizing each
sample's local distances while propagating through the relationship between
samples to attain the whole intra-class minimization. To this end, a novel
objective function is proposed to jointly optimize similarity metric learning,
local positive mining and robust deep embedding. This yields local
discriminations by selecting local-ranged positive samples, and the learned
features are robust to dramatic intra-class variations. Experiments on
benchmarks show state-of-the-art results achieved by our method.
Comment: Published in Pattern Recognition
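As a rough illustration of the local positive mining idea above (a minimal sketch, not the authors' implementation; the function name, the fixed local_range threshold, and the choice of PyTorch are all assumptions), a loss term that pulls each anchor only toward intra-class samples already lying within a local distance range could look like:

    import torch

    def local_positive_loss(embeddings, labels, local_range=0.5):
        # Pairwise Euclidean distances in the learned embedding space.
        dists = torch.cdist(embeddings, embeddings)
        # Intra-class (positive) pairs, excluding self-pairs.
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        same.fill_diagonal_(False)
        # Suitable positives: intra-class AND within the local range, so the
        # penalty adapts to local sample structure rather than one global scale.
        local_pos = same & (dists < local_range)
        if not local_pos.any():
            return embeddings.new_zeros(())
        return dists[local_pos].mean()

In the paper this kind of term is optimized jointly with the similarity metric and the embedding itself; the sketch isolates only the local sampling step.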
Fully Dynamic Inference with Deep Neural Networks
Modern deep neural networks are powerful and widely applicable models that
extract task-relevant information through multi-level abstraction. Their
cross-domain success, however, is often achieved at the expense of high
computational cost, memory bandwidth, and inference latency, which prevents
their deployment in resource-constrained and time-sensitive scenarios,
such as edge-side inference and self-driving cars. While recently developed
methods for creating efficient deep neural networks are making their real-world
deployment more feasible by reducing model size, they do not fully exploit
input properties on a per-instance basis to maximize computational efficiency
and task accuracy. In particular, most existing methods use a
one-size-fits-all approach that processes all inputs identically. Motivated by
the fact that different images require different feature embeddings to be
accurately classified, we propose a fully dynamic paradigm that imparts deep
convolutional neural networks with hierarchical inference dynamics at the level
of layers and individual convolutional filters/channels. Two compact networks,
called Layer-Net (L-Net) and Channel-Net (C-Net), predict on a per-instance
basis which layers or filters/channels are redundant and therefore should be
skipped. L-Net and C-Net also learn how to scale retained computation outputs
to maximize task accuracy. By integrating L-Net and C-Net into a joint design
framework, called LC-Net, we consistently outperform state-of-the-art dynamic
frameworks with respect to both efficiency and classification accuracy. On the
CIFAR-10 dataset, LC-Net results in up to 11.9× fewer floating-point
operations (FLOPs) and up to 3.3% higher accuracy compared to other dynamic
inference methods. On the ImageNet dataset, LC-Net achieves up to 1.4×
fewer FLOPs and up to 4.6% higher Top-1 accuracy than the other methods.
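The abstract does not detail L-Net or C-Net, but the channel-level idea can be sketched under assumptions (a pooled per-instance descriptor feeding a small MLP that decides which channels to skip and how to scale the rest; all names and sizes below are illustrative, not the paper's design):

    import torch
    import torch.nn as nn

    class ChannelGateSketch(nn.Module):
        # Hypothetical stand-in for a C-Net-style per-instance gate.
        def __init__(self, channels, hidden=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # per-instance channel summary
            self.mlp = nn.Sequential(
                nn.Linear(channels, hidden), nn.ReLU(),
                nn.Linear(hidden, channels))

        def forward(self, x):  # x: (N, C, H, W)
            logits = self.mlp(self.pool(x).flatten(1))
            keep = (logits > 0).float()      # per-instance skip/keep decision
            scale = torch.sigmoid(logits)    # learned rescaling of kept outputs
            # Hard gating is non-differentiable; training would need a
            # relaxation such as straight-through or Gumbel-softmax.
            return x * (keep * scale)[:, :, None, None]

A layer-level gate in the spirit of L-Net would make the analogous keep/skip decision per layer or block rather than per channel.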
Deep Contrast Learning for Salient Object Detection
Salient object detection has recently witnessed substantial progress due to
powerful features extracted using deep convolutional neural networks (CNNs).
However, existing CNN-based methods operate at the patch level instead of the
pixel level. The resulting saliency maps are typically blurry, especially near the
boundary of salient objects. Furthermore, image patches are treated as
independent samples even when they are overlapping, giving rise to significant
redundancy in computation and storage. In this CVPR 2016 paper, we propose an
end-to-end deep contrast network to overcome the aforementioned limitations.
Our deep network consists of two complementary components, a pixel-level fully
convolutional stream and a segment-wise spatial pooling stream. The first
stream directly produces a saliency map with pixel-level accuracy from an input
image. The second stream extracts segment-wise features very efficiently, and
better models saliency discontinuities along object boundaries. Finally, a
fully connected CRF model can be optionally incorporated to improve spatial
coherence and contour localization in the fused result from these two streams.
Experimental results demonstrate that our deep model significantly improves the
state of the art.
Comment: To appear in CVPR 2016
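To make the two-stream design concrete, here is a hypothetical fusion step (the names and the simple weighted average are assumptions; the paper's actual fusion and the optional CRF refinement are not reproduced): the segment stream's per-superpixel scores are broadcast back to pixels and combined with the pixel-level map.

    import torch

    def fuse_streams(pixel_map, segment_scores, segment_ids, alpha=0.5):
        # pixel_map:      (H, W) saliency from the fully convolutional stream
        # segment_scores: (S,)   per-segment saliency from the pooling stream
        # segment_ids:    (H, W) integer superpixel label of each pixel
        segment_map = segment_scores[segment_ids]  # broadcast scores to pixels
        fused = alpha * pixel_map + (1 - alpha) * segment_map
        return fused.clamp(0, 1)  # a fully connected CRF could refine this map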