A Study of Actor and Action Semantic Retention in Video Supervoxel Segmentation
Existing methods in the semantic computer vision community struggle to cope with the explosion and richness of modern, open-source and social video content. Although sophisticated methods such as object detection and bag-of-words models have been well studied, they typically operate on low-level features and ultimately suffer from either scalability issues or a lack of semantic meaning. Video supervoxel segmentation, on the other hand, has recently been established and applied to large-scale data processing, and it potentially serves as an intermediate representation for high-level video semantic extraction. Supervoxels are rich decompositions of the video content: they capture object shape and motion well. However, it is not yet known whether supervoxel segmentation retains the semantics of the underlying video content. In this paper, we conduct a systematic study of how well actor and action semantics are retained in video supervoxel segmentation. In our study, human observers watch supervoxel segmentation videos and attempt to discriminate both the actor (human or animal) and the action (one of eight everyday actions). We gather and analyze a large set of 640 human perceptions over 96 videos at 3 different supervoxel scales. Furthermore, we conduct machine recognition experiments on a feature defined on the supervoxel segmentation, called the supervoxel shape context, which is inspired by the higher-order processes in human perception. Our findings suggest that a significant amount of semantic information is retained in video supervoxel segmentation and can be used for further video analysis.
Comment: This article is in review at the International Journal of Semantic Computing.
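The abstract does not spell out how the supervoxel shape context is computed, but the general shape-context idea it invokes is a log-polar histogram of boundary points. The sketch below is a minimal, assumption-laden illustration of that idea over a per-frame supervoxel label map: the binning scheme, the frame-center reference point, and all function names are illustrative choices, not the paper's actual descriptor.

```python
# Shape-context-style descriptor over a supervoxel frame labeling.
# All design choices here (center point, bin counts) are assumptions
# for illustration, not the paper's exact "supervoxel shape context".
import numpy as np

def boundary_points(labels):
    """Pixels where the supervoxel label changes (rough boundary map)."""
    mask = np.zeros(labels.shape, dtype=bool)
    mask[1:, :] |= labels[1:, :] != labels[:-1, :]
    mask[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    return np.argwhere(mask)  # (N, 2) array of (row, col) points

def shape_context(labels, n_r=5, n_theta=12):
    """Log-polar histogram of boundary points around the frame center."""
    pts = boundary_points(labels).astype(float)
    if len(pts) == 0:
        return np.zeros((n_r, n_theta))
    center = np.array(labels.shape, dtype=float) / 2.0
    d = pts - center
    r = np.linalg.norm(d, axis=1)
    theta = np.arctan2(d[:, 0], d[:, 1])  # angle in [-pi, pi]
    # Log-spaced radial bins, uniform angular bins.
    r_edges = np.logspace(0, np.log10(max(r.max(), 2.0)), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)
    return hist / max(hist.sum(), 1)  # normalized descriptor
```

A descriptor of this form depends only on the segmentation boundaries, not on appearance, which is what makes it a plausible probe of how much semantics the supervoxel decomposition itself retains.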
Online, Supervised and Unsupervised Action Localization in Videos
Action recognition classifies a given video among a set of action labels, whereas action localization determines the location of an action in addition to its class. The overall aim of this dissertation is action localization. Many existing action localization approaches exhaustively search (spatially and temporally) for an action in a video. However, as the search space grows with higher-resolution and longer videos, such sliding-window techniques become impractical. The first part of this dissertation presents an efficient approach for localizing actions by learning contextual relations between different video regions during training. At test time, we use the context information to estimate the probability of each supervoxel belonging to the foreground action and use a Conditional Random Field (CRF) to localize actions. In the above method, as in typical approaches to this problem, localization is performed in an offline manner where all the video frames are processed together. This prevents timely localization and prediction of actions/interactions, an important consideration for many tasks including surveillance and human-machine interaction. Therefore, in the second part of this dissertation we propose an online approach to the challenging problem of localization and prediction of actions/interactions in videos. In this approach, we use human poses and superpixels in each frame to train discriminative appearance models and perform online prediction of actions/interactions with a Structural SVM. The above two approaches rely on human supervision in the form of action class labels assigned to videos and actor bounding boxes annotated in each frame of the training videos. Therefore, in the third part of this dissertation we address the problem of unsupervised action localization. Given unlabeled videos without annotations, this approach aims at: 1) discovering action classes using a discriminative clustering approach, and 2) localizing actions using a variant of the Knapsack problem.
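To make the first part's CRF step concrete, the sketch below shows a minimal binary (foreground/background) labeling over supervoxels of the general kind described, using iterated conditional modes (ICM) with a Potts pairwise term. The unary scores, the adjacency structure, the ICM inference, and the weight `lam` are simplifying assumptions for illustration, not the dissertation's model.

```python
# Minimal binary CRF over supervoxels, solved approximately with ICM.
# Unaries, adjacency, and `lam` are illustrative assumptions.
import numpy as np

def icm_crf(unary_fg, adjacency, lam=0.5, n_iters=10):
    """unary_fg: per-supervoxel probability of belonging to the action.
    adjacency: dict mapping supervoxel id -> list of neighbor ids."""
    unary_fg = np.asarray(unary_fg, dtype=float)
    labels = (unary_fg > 0.5).astype(int)  # initialize from unaries
    eps = 1e-9
    for _ in range(n_iters):
        for i in range(len(unary_fg)):
            # Unary cost: negative log-likelihood of each label.
            cost = np.array([-np.log(1.0 - unary_fg[i] + eps),
                             -np.log(unary_fg[i] + eps)])
            # Potts pairwise cost: penalize disagreeing with neighbors.
            for j in adjacency.get(i, []):
                cost[1 - labels[j]] += lam
            labels[i] = int(np.argmin(cost))
    return labels  # 1 = supervoxel assigned to the foreground action
```

The appeal of formulating localization this way is that the pairwise term propagates evidence between neighboring supervoxels, so isolated noisy unaries get smoothed into a spatially coherent action region.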
Click Carving: Segmenting Objects in Video with Point Clicks
We present a novel form of interactive video object segmentation where a few clicks by the user help the system produce a full spatio-temporal segmentation of the object of interest. Whereas conventional interactive pipelines take the user's initialization as a starting point, we show the value in the system taking the lead even in initialization. In particular, for a given video frame, the system precomputes a ranked list of thousands of possible segmentation hypotheses (also referred to as object region proposals) using image and motion cues. Then, the user looks at the top-ranked proposals and clicks on the object boundary to carve away erroneous ones. This process iterates (typically 2-3 times), with the system revising the top-ranked proposal set each time, until the user is satisfied with the resulting segmentation mask. Finally, the mask is propagated across the video to produce a spatio-temporal object tube. On three challenging datasets, we provide extensive comparisons with both existing work and simpler alternative methods. Overall, the proposed Click Carving approach strikes an excellent balance between accuracy and human effort. It outperforms all similarly fast methods, and is competitive with or better than methods requiring 2 to 12 times the effort.
Comment: A preliminary version of the material in this document was filed as University of Texas technical report no. UT AI16-0
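One way to picture the iterate-and-carve loop is as proposal filtering: a click on the true object boundary eliminates proposals whose own boundaries pass nowhere near that click. The sketch below illustrates this reading under simplifying assumptions: proposals are binary masks, carving is a hard distance test with threshold `tol`, and the function names are hypothetical rather than the authors' implementation.

```python
# Proposal re-ranking loop in the spirit of Click Carving. Proposals
# whose boundary is far from the user's boundary click are carved away;
# the distance test and threshold are illustrative assumptions.
import numpy as np

def mask_boundary(mask):
    """Interior boundary pixels of a binary mask (label-change test)."""
    b = np.zeros(mask.shape, dtype=bool)
    b[1:, :] |= mask[1:, :] != mask[:-1, :]
    b[:, 1:] |= mask[:, 1:] != mask[:, :-1]
    return np.argwhere(b & mask.astype(bool))

def carve(proposals, scores, click, tol=10.0):
    """Keep proposals whose boundary passes within `tol` pixels of the
    click at (row, col); return survivors sorted by original score."""
    keep = []
    for idx, mask in enumerate(proposals):
        pts = mask_boundary(mask)
        if len(pts) == 0:
            continue
        if np.min(np.linalg.norm(pts - np.array(click), axis=1)) <= tol:
            keep.append(idx)
    keep.sort(key=lambda i: -scores[i])
    return keep  # the user inspects the new top-ranked proposal next
```

Because each click prunes a large fraction of the precomputed hypotheses, a handful of clicks can isolate a good mask far faster than drawing an initialization from scratch, which is the trade-off the abstract's accuracy-versus-effort comparison measures.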