Spatial-Aware Object Embeddings for Zero-Shot Localization and Classification of Actions
We aim for zero-shot localization and classification of human actions in
video. Where traditional approaches rely on global attribute or object
classification scores for their zero-shot knowledge transfer, our main
contribution is a spatial-aware object embedding. To arrive at spatial
awareness, we build our embedding on top of freely available actor and object
detectors. Relevance of objects is determined in a word embedding space and
further enforced with estimated spatial preferences. Besides local object
awareness, we also encode global object awareness in our embedding to maximize
actor and object interaction. Finally, we exploit the object positions and
sizes in the spatial-aware embedding to demonstrate a new spatio-temporal
action retrieval scenario with composite queries. Action localization and
classification experiments on four contemporary action video datasets support
our proposal. Apart from state-of-the-art results in the zero-shot localization
and classification settings, our spatial-aware embedding is even competitive
with recent supervised action localization alternatives.
Comment: ICCV
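As a concrete illustration of the relevance step, the following Python sketch ranks object detectors by word-embedding similarity to an unseen action. The toy vectors and the `kayaking`/`paddle` names are hypothetical stand-ins, not the paper's actual word2vec space, and the spatial-preference and actor-interaction terms are omitted.

```python
# Minimal sketch: score available object detectors by their semantic
# relatedness to an unseen action query. Toy 4-d vectors stand in for
# real word-embedding vectors (an assumption for illustration only).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for one action query and three object detectors.
embeddings = {
    "kayaking":   np.array([0.9, 0.1, 0.0, 0.2]),
    "paddle":     np.array([0.8, 0.2, 0.1, 0.1]),
    "horse":      np.array([0.1, 0.9, 0.3, 0.0]),
    "skateboard": np.array([0.3, 0.1, 0.9, 0.2]),
}

action, objects = "kayaking", ["paddle", "horse", "skateboard"]

# Rank objects by similarity to the action; the top-ranked detectors would
# then be combined with actor boxes and estimated spatial preferences.
scores = {o: cosine(embeddings[action], embeddings[o]) for o in objects}
for obj, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{obj}: {s:.3f}")
```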
Video Time: Properties, Encoders and Evaluation
Time-aware encoding of frame sequences in a video is a fundamental problem in
video understanding. While many have attempted to model time in videos, an explicit
study on quantifying video time is missing. To fill this lacuna, we aim to
evaluate video time explicitly. We describe three properties of video time,
namely a) temporal asymmetry, b) temporal continuity, and c) temporal causality.
Based on each we formulate a task able to quantify the associated property.
This allows assessing the effectiveness of modern video encoders, like C3D and
LSTM, in their ability to model time. Our analysis provides insights about
existing encoders while also leading us to propose a new video time encoder,
which is better suited for the video time recognition tasks than C3D and LSTM.
We believe the proposed meta-analysis can provide a reasonable baseline to
assess video time encoders on equal grounds on a set of temporal-aware tasks.
Comment: 14 pages, BMVC 2018
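As one example of such a property-probing task, a temporal-asymmetry test can be built by asking an encoder to distinguish forward from time-reversed clips. The sketch below only prepares the data for that binary task; the encoder under test (e.g. C3D or an LSTM) is left out, and the function name is illustrative.

```python
# Minimal sketch of a temporal-asymmetry probe: build a binary
# forward-vs-reversed classification task from unlabeled clips.
import numpy as np

def make_arrow_of_time_batch(clips, seed=0):
    """clips: array of shape (N, T, H, W, C).
    Returns (inputs, labels) with label 1 = original order, 0 = reversed."""
    backward = clips[:, ::-1]                      # reverse the time axis
    inputs = np.concatenate([clips, backward], axis=0)
    labels = np.concatenate([np.ones(len(clips)), np.zeros(len(backward))])
    perm = np.random.default_rng(seed).permutation(len(inputs))
    return inputs[perm], labels[perm]              # shuffled together

# Toy usage with random "videos"; a real probe would feed these to the
# encoder under test and measure its classification accuracy.
clips = np.random.rand(4, 16, 8, 8, 3)
x, y = make_arrow_of_time_batch(clips)
print(x.shape, y)
```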
Video Stream Retrieval of Unseen Queries using Semantic Memory
Retrieval of live, user-broadcast video streams is an under-addressed and
increasingly relevant challenge. The on-line nature of the problem requires
temporal evaluation and the unforeseeable scope of potential queries motivates
an approach which can accommodate arbitrary search queries. To account for the
breadth of possible queries, we adopt a no-example approach to query retrieval,
which uses a query's semantic relatedness to pre-trained concept classifiers.
To adapt to shifting video content, we propose memory pooling and memory
welling methods that favor recent information over long past content. We
identify two stream retrieval tasks, instantaneous retrieval at any particular
time and continuous retrieval over a prolonged duration, and propose means for
evaluating them. Three large-scale video datasets are adapted to the challenge
of stream retrieval. We report results for our search methods on the new stream
retrieval tasks, as well as demonstrate their efficacy in a traditional,
non-streaming video task.
Comment: Presented at BMVC 2016, British Machine Vision Conference, 2016
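To make the recency idea concrete, here is a small Python sketch of a memory that exponentially decays old concept scores; this simple update is only one way to favor recent content, and the paper's memory pooling and welling definitions may differ in detail.

```python
# Minimal sketch of a recency-weighted memory over per-frame concept scores,
# queried zero-example style via the query's relatedness to each concept.
def update_memory(memory, frame_scores, decay=0.9):
    """memory, frame_scores: dicts mapping concept name -> score."""
    for concept, score in frame_scores.items():
        memory[concept] = decay * memory.get(concept, 0.0) + (1 - decay) * score
    return memory

def query_score(memory, query_weights):
    """query_weights: semantic relatedness of each concept to the text query,
    e.g. derived from word embeddings (no examples of the query needed)."""
    return sum(w * memory.get(c, 0.0) for c, w in query_weights.items())

memory = {}
stream = [{"dog": 0.1, "ball": 0.7}, {"dog": 0.8, "ball": 0.2}]
for frame_scores in stream:                 # frames arrive over time
    update_memory(memory, frame_scores)
print(query_score(memory, {"dog": 1.0, "ball": 0.5}))
```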
Spherical Regression: Learning Viewpoints, Surface Normals and 3D Rotations on n-Spheres
Many computer vision challenges require continuous outputs, but tend to be
solved by discrete classification. The reason is classification's natural
containment within a probability n-simplex, as defined by the popular softmax
activation function. Regular regression lacks such a closed geometry, leading
to unstable training and convergence to suboptimal local minima. Starting from
this insight we revisit regression in convolutional neural networks. We observe
many continuous output problems in computer vision are naturally contained in
closed geometrical manifolds, like the Euler angles in viewpoint estimation or
the normals in surface normal estimation. A natural framework for posing such
continuous output problems are n-spheres, which are naturally closed
geometric manifolds defined in the R^(n+1) space. By introducing a
spherical exponential mapping on n-spheres at the regression output, we
obtain well-behaved gradients, leading to stable training. We show how our
spherical regression can be utilized for several computer vision challenges,
specifically viewpoint estimation, surface normal estimation and 3D rotation
estimation. For all these problems our experiments demonstrate the benefit of
spherical regression. All paper resources are available at
https://github.com/leoshine/Spherical_Regression.
Comment: CVPR 2019 camera ready
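To sketch the idea, a spherical-exponential-style activation can map raw network outputs onto the unit n-sphere, analogous to how softmax maps onto the probability simplex; the snippet below reflects one reading of that construction, not a verbatim copy of the paper's layer. Since exponentials are positive, it covers only the positive orthant, so signs would need separate handling.

```python
# Minimal sketch: like softmax this uses exponentials, but it L2-normalizes,
# so the output lies on the unit n-sphere (||p||_2 = 1) and gradients stay
# well behaved. One reading of the construction, for illustration only.
import numpy as np

def spherical_exp(o):
    e = np.exp(o - o.max())        # subtract the max for numerical stability
    return e / np.linalg.norm(e)   # L2 norm instead of softmax's L1 sum

o = np.array([1.0, -2.0, 0.5])     # raw regression outputs
p = spherical_exp(o)
print(p, np.linalg.norm(p))        # the norm is exactly 1.0
```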
Counting with Focus for Free
This paper aims to count arbitrary objects in images. The leading counting
approaches start from point annotations per object from which they construct
density maps. Deep convolutional networks are then trained to transform input
images into these density maps. We posit that the point annotations
serve more supervision purposes than just constructing density maps. We
introduce ways to repurpose the points for free. First, we propose supervised
focus from segmentation, where points are converted into binary maps. The
binary maps are combined with a network branch and accompanying loss function
to focus on areas of interest. Second, we propose supervised focus from global
density, where the ratio of point annotations to image pixels is used in
another branch to regularize the overall density estimation. To assist both the
density estimation and the focus from segmentation, we also introduce an
improved kernel size estimator for the point annotations. Experiments on six
datasets show that all our contributions reduce the counting error, regardless
of the base network, resulting in state-of-the-art accuracy using only a single
network. Finally, we are the first to count on WIDER FACE, allowing us to show
the benefits of our approach in handling varying object scales and crowding
levels. Code is available at
https://github.com/shizenglin/Counting-with-Focus-for-Free
Comment: ICCV, 2019
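The snippet below illustrates how point annotations can be repurposed along the lines the abstract describes: the same points yield a density map, a binary segmentation-style focus map, and a global density ratio. The fixed Gaussian width and the 10% threshold are simplifying assumptions; the paper instead estimates kernel sizes per point.

```python
# Minimal sketch: derive three supervision signals from point annotations.
import numpy as np
from scipy.ndimage import gaussian_filter

def point_maps(points, shape, sigma=4.0):
    """points: list of (row, col); shape: (H, W)."""
    impulse = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        impulse[r, c] += 1.0
    density = gaussian_filter(impulse, sigma)          # sums to the count
    focus = (density > 0.1 * density.max()).astype(np.float32)  # binary map
    return density, focus

points, shape = [(10, 12), (30, 40)], (64, 64)
density, focus = point_maps(points, shape)
print(density.sum())                        # ~2.0, the object count
print(len(points) / (shape[0] * shape[1]))  # global density ratio
```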
No Spare Parts: Sharing Part Detectors for Image Categorization
This work aims for image categorization using a representation of distinctive
parts. Different from existing part-based work, we argue that parts are
naturally shared between image categories and should be modeled as such. We
motivate our approach with a quantitative and qualitative analysis by
backtracking where selected parts come from. Our analysis shows that in
addition to the category parts defining the class, the parts coming from the
background context and parts from other image categories improve categorization
performance. Part selection should not be done separately for each category,
but instead be shared and optimized over all categories. To incorporate part
sharing between categories, we present an algorithm based on AdaBoost to
jointly optimize part sharing and selection, as well as fusion with the global
image representation. We achieve results competitive to the state-of-the-art on
object, scene, and action categories, further improving over deep convolutional
neural networks.
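As a toy illustration of sharing, the sketch below runs a simplified boosting loop in which each round selects the single part whose thresholded response best separates examples pooled over all categories, so one part can serve several classes at once. The data, threshold and weighting scheme are stand-ins, not the paper's actual AdaBoost formulation.

```python
# Toy sketch of shared part selection via boosting over all categories.
import numpy as np

rng = np.random.default_rng(0)
n, n_parts, n_classes = 200, 30, 3
part_scores = rng.random((n, n_parts))        # per-image part responses
labels = rng.integers(0, n_classes, n)        # image categories
Y = np.where(labels[:, None] == np.arange(n_classes), 1.0, -1.0)  # one-vs-rest

weights = np.ones((n, n_classes)) / n         # per-class example weights
selected = []
for _ in range(5):                            # a few boosting rounds
    errs = [(weights * (np.where(part_scores[:, p:p+1] > 0.5, 1.0, -1.0) != Y)).sum()
            for p in range(n_parts)]
    best = int(np.argmin(errs))               # error pooled over ALL classes
    e = errs[best] / n_classes
    alpha = 0.5 * np.log((1.0 - e) / (e + 1e-9))
    h = np.where(part_scores[:, best:best+1] > 0.5, 1.0, -1.0)
    weights *= np.exp(-alpha * h * Y)         # AdaBoost-style reweighting
    weights /= weights.sum(axis=0, keepdims=True)
    selected.append(best)
print("parts shared across categories:", selected)
```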