Beyond standard benchmarks: Parameterizing performance evaluation in visual object tracking
Object-to-camera motion produces a variety of apparent motion patterns that
significantly affect the performance of short-term visual trackers. Despite being
crucial for designing robust trackers, the influence of these patterns is poorly explored in
standard benchmarks due to weakly defined, biased and overlapping attribute
annotations. In this paper we propose to go beyond pre-recorded benchmarks with
post-hoc annotations by presenting an approach that utilizes omnidirectional
videos to generate realistic, consistently annotated, short-term tracking
scenarios with exactly parameterized motion patterns. We have created an
evaluation system, constructed a fully annotated dataset of omnidirectional
videos and the generators for typical motion patterns. We provide an in-depth
analysis of major tracking paradigms that is complementary to the standard
benchmarks and confirms the expressiveness of our evaluation approach.
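A minimal sketch of how such parameterized motion patterns can be generated by moving a perspective viewport over an equirectangular (omnidirectional) frame. This is an illustrative reconstruction under assumed conventions, not the authors' released evaluation system; the function and parameter names (viewport, fov_deg) are hypothetical.

# Render a pinhole view from an equirectangular frame; panning the view
# center (lon0, lat0) over time yields exactly parameterized apparent motion.
import numpy as np
import cv2

def viewport(equi, lon0, lat0, fov_deg=90.0, size=400):
    """Perspective view centered at (lon0, lat0) in radians."""
    H, W = equi.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2)   # pinhole focal length
    xs = (np.arange(size) - size / 2) / f              # normalized tangent-plane coords
    x, y = np.meshgrid(xs, xs)
    rho = np.sqrt(x**2 + y**2)
    c = np.arctan(rho)                                 # inverse gnomonic projection
    rho = np.where(rho == 0, 1e-9, rho)                # avoid divide-by-zero at center
    lat = np.arcsin(np.cos(c) * np.sin(lat0) + y * np.sin(c) * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(x * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c) - y * np.sin(lat0) * np.sin(c))
    u = ((lon / (2 * np.pi) + 0.5) % 1.0) * (W - 1)    # wrap longitude
    v = (0.5 - lat / np.pi) * (H - 1)
    return cv2.remap(equi, u.astype(np.float32), v.astype(np.float32),
                     cv2.INTER_LINEAR)

# Example: a sinusoidal pan produces a controlled apparent-motion pattern.
# frames = [viewport(equi, lon0=0.3 * np.sin(0.1 * t), lat0=0.0) for t in range(100)]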
The Secrets of Salient Object Segmentation
In this paper we provide an extensive evaluation of fixation prediction and
salient object segmentation algorithms as well as statistics of major datasets.
Our analysis identifies a serious design flaw of existing salient object
benchmarks, which we call the dataset design bias: they over-emphasize
stereotypical concepts of saliency. This bias not only creates a discomforting
disconnection between fixations and salient object segmentation, but also
misleads algorithm design. Based on our analysis,
we propose a new high quality dataset that offers both fixation and salient
object segmentation ground-truth. With fixations and salient objects presented
simultaneously, we are able to bridge the gap between the two and propose a
novel method for salient object segmentation.
Finally, we report significant benchmark progress on three existing salient
object segmentation datasets.
Comment: 15 pages, 8 figures. Conference version was accepted by CVPR 2014.
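One simple way to bridge fixation maps and object segments, in the spirit of the abstract above, is to rank candidate object masks by the fixation density they capture. The sketch below is illustrative only, not the paper's exact algorithm; rank_segments_by_fixations and area_penalty are assumed names.

# Score each candidate mask by the fraction of fixation mass it encloses,
# penalized by its area so that trivially large masks do not win.
import numpy as np

def rank_segments_by_fixations(masks, fixation_map, area_penalty=0.5):
    """masks: list of HxW boolean arrays; fixation_map: HxW nonnegative floats."""
    total = fixation_map.sum() + 1e-12
    scores = []
    for m in masks:
        inside = fixation_map[m].sum() / total         # fraction of fixation mass inside
        frac_area = m.mean()                           # fraction of image covered
        scores.append(inside - area_penalty * frac_area)
    order = np.argsort(scores)[::-1]                   # best-scoring masks first
    return order, np.asarray(scores)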
Graph Distillation for Action Detection with Privileged Modalities
We propose a technique that tackles action detection in multimodal videos
under a realistic and challenging condition in which only limited training data
and partially observed modalities are available. Common methods in transfer
learning do not take advantage of the extra modalities potentially available in
the source domain. On the other hand, previous work on multimodal learning only
focuses on a single domain or task and does not handle the modality discrepancy
between training and testing. In this work, we propose a method termed graph
distillation that incorporates rich privileged information from a large-scale
multimodal dataset in the source domain, and improves the learning in the
target domain where training data and modalities are scarce. We evaluate our
approach on action classification and detection tasks in multimodal videos, and
show that our model outperforms the state-of-the-art by a large margin on the
NTU RGB+D and PKU-MMD benchmarks. The code is released at
http://alan.vision/eccv18_graph/.
Comment: ECCV 2018.
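A minimal sketch of a cross-modality distillation loss with a learned edge graph, in the spirit of the abstract above. This is an assumed interface, not the released implementation at the URL above; graph_distillation_loss, edge_logits, and the temperature T are illustrative names.

# Each modality's predictions are pulled toward a graph-weighted average of
# the other modalities' soft targets; edge weights are learned jointly.
import torch
import torch.nn.functional as F

def graph_distillation_loss(logits_per_modality, edge_logits, T=2.0):
    """logits_per_modality: list of (batch, classes) tensors, one per modality.
    edge_logits: (M, M) learnable tensor of directed edge scores (e.g. nn.Parameter)."""
    M = len(logits_per_modality)
    W = torch.softmax(edge_logits, dim=1)              # row-normalized edge weights
    loss = 0.0
    for i in range(M):
        # Teacher: weighted mixture of the other modalities' softened predictions.
        teacher = sum(W[i, j] * torch.softmax(logits_per_modality[j] / T, dim=1)
                      for j in range(M) if j != i)
        teacher = teacher / (1.0 - W[i, i] + 1e-12)    # renormalize without the self-edge
        student = torch.log_softmax(logits_per_modality[i] / T, dim=1)
        loss = loss + F.kl_div(student, teacher.detach(), reduction='batchmean')
    return loss / M

In training, a loss like this would typically be added to the ordinary supervised loss on the source domain, so that modalities available only at training time (the privileged ones) still shape the target-domain model.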