Deep Motion Features for Visual Tracking
Robust visual tracking is a challenging computer vision problem, with many
real-world applications. Most existing approaches employ hand-crafted
appearance features, such as HOG or Color Names. Recently, deep RGB features
extracted from convolutional neural networks have been successfully applied to
tracking. Despite their success, these features only capture appearance
information. On the other hand, motion cues provide discriminative and
complementary information that can improve tracking performance. Unlike in
visual tracking, deep motion features have already been successfully applied to action
recognition and video classification tasks. Typically, the motion features are
learned by training a CNN on optical flow images extracted from large amounts
of labeled videos.
This paper investigates the impact of deep motion features in
a tracking-by-detection framework. We further show that hand-crafted, deep RGB,
and deep motion features contain complementary information. To the best of our
knowledge, we are the first to propose fusing appearance information with deep
motion features for visual tracking. Comprehensive experiments clearly suggest
that our fusion approach with deep motion features outperforms standard methods
relying on appearance information alone.
Comment: ICPR 2016. Best paper award in the "Computer Vision and Robot Vision" track.
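As a rough illustration of the fusion idea in the abstract above, the following sketch concatenates hand-crafted, deep RGB, and deep motion features and scores candidate patches with a simple linear detector. All feature extractors, dimensions, and the ridge-regression detector are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (illustrative placeholders only, not the paper's method):
# fuse hand-crafted, deep RGB, and deep motion features for tracking-by-detection.
import numpy as np

def extract_hog(patch):        # stand-in for a hand-crafted appearance feature
    return np.random.rand(64)

def extract_deep_rgb(patch):   # stand-in for features from a pretrained RGB CNN
    return np.random.rand(512)

def extract_deep_flow(patch):  # stand-in for features from a CNN on optical-flow images
    return np.random.rand(512)

def fused_feature(patch):
    # Late fusion by concatenating the three complementary cues.
    return np.concatenate([extract_hog(patch),
                           extract_deep_rgb(patch),
                           extract_deep_flow(patch)])

def train_ridge(X, y, lam=1e-2):
    # Linear detector w = (X^T X + lam*I)^{-1} X^T y, trained on samples
    # labelled in previous frames.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def track_step(candidate_patches, w):
    # Score each candidate region in the new frame and return the best index.
    scores = np.array([fused_feature(p) @ w for p in candidate_patches])
    return int(np.argmax(scores))
```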
Deep Multimodal Speaker Naming
Automatic speaker naming is the problem of localizing as well as identifying
each speaking character in a TV/movie/live show video. This is a challenging
problem, largely due to its multimodal nature: the face cue alone is
insufficient to achieve good performance. Previous multimodal approaches to
this problem usually process the data of different modalities individually and
merge them using handcrafted heuristics. Such approaches work well for simple
scenes, but fail to achieve high performance for speakers with large appearance
variations. In this paper, we propose a novel convolutional neural network
(CNN) based learning framework to automatically learn the fusion function of
both face and audio cues. We show that without using face tracking, facial
landmark localization, or subtitles/transcripts, our system with robust multimodal
feature extraction is able to achieve state-of-the-art speaker naming
performance evaluated on two diverse TV series. The dataset and implementation
of our algorithm are publicly available online.
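A minimal sketch of the end-to-end face/audio fusion idea is given below, assuming PyTorch; the layer sizes, input shapes, and the use of MFCC-style audio features are illustrative assumptions rather than the architecture from the paper.

```python
# Minimal sketch, assuming PyTorch: a learned fusion of face and audio cues
# for speaker naming. Sizes and inputs are illustrative, not the paper's design.
import torch
import torch.nn as nn

class SpeakerNamingNet(nn.Module):
    def __init__(self, num_speakers, audio_dim=40):
        super().__init__()
        # Face branch: small CNN over 3x64x64 face crops.
        self.face = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 128), nn.ReLU())
        # Audio branch: MLP over per-segment audio features (e.g. MFCCs).
        self.audio = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
        # The fusion is learned end-to-end instead of handcrafted heuristics.
        self.classifier = nn.Linear(128 + 128, num_speakers)

    def forward(self, face_img, audio_feat):
        f = self.face(face_img)
        a = self.audio(audio_feat)
        return self.classifier(torch.cat([f, a], dim=1))

# Example forward pass on a dummy batch of 8 samples.
net = SpeakerNamingNet(num_speakers=6)
logits = net(torch.randn(8, 3, 64, 64), torch.randn(8, 40))
```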
Learning to rank in person re-identification with metric ensembles
We propose an effective structured learning based approach to the problem of
person re-identification which outperforms the current state-of-the-art on most
benchmark data sets evaluated. Our framework is built on the basis of multiple
low-level hand-crafted and high-level visual features. We then formulate two
optimization algorithms, which directly optimize the evaluation measure commonly
used in person re-identification, the Cumulative Matching
Characteristic (CMC) curve. Our new approach is practical for many real-world
surveillance applications as the re-identification performance can be
concentrated in the range of most practical importance. The combination of
these factors leads to a person re-identification system which outperforms most
existing algorithms. More importantly, we advance state-of-the-art results on
person re-identification by improving the rank- recognition rates from
to on the iLIDS benchmark, to on the PRID2011
benchmark, to on the VIPeR benchmark, to on the
CUHK01 benchmark and to on the CUHK03 benchmark.
Comment: 10 pages.
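To make the evaluation measure concrete, the sketch below computes a CMC curve for a weighted ensemble of base distance metrics. The base metrics, the fixed weights, and the random data are illustrative placeholders; the paper learns the ensemble with structured optimization rather than fixing the weights by hand.

```python
# Minimal sketch: evaluate a weighted ensemble of base distance metrics
# with a CMC curve. Metrics and weights are placeholders, not the paper's model.
import numpy as np

def cmc_curve(probe, gallery, gallery_ids, probe_ids, weights, metrics):
    cmc = np.zeros(gallery.shape[0])
    for i, p in enumerate(probe):
        # Ensemble distance: weighted sum of the base metrics.
        dist = sum(w * m(p, gallery) for w, m in zip(weights, metrics))
        order = np.argsort(dist)                              # best match first
        rank = np.where(gallery_ids[order] == probe_ids[i])[0][0]
        cmc[rank:] += 1                                       # hit at this rank and beyond
    return cmc / len(probe)

# Two simple base metrics (Euclidean and cosine distance) as placeholders.
euclid = lambda p, G: np.linalg.norm(G - p, axis=1)
cosine = lambda p, G: 1 - (G @ p) / (np.linalg.norm(G, axis=1) * np.linalg.norm(p) + 1e-8)

probe   = np.random.rand(10, 32)
gallery = np.random.rand(10, 32)
ids     = np.arange(10)
curve = cmc_curve(probe, gallery, ids, ids, weights=[0.5, 0.5], metrics=[euclid, cosine])
```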