A Study of Actor and Action Semantic Retention in Video Supervoxel Segmentation
Existing methods in the semantic computer vision community seem unable to
cope with the explosion and richness of modern, open-source and social video
content. Although sophisticated methods such as object detection and
bag-of-words models have been well studied, they typically operate on
low-level features and ultimately suffer from either scalability issues or a
lack of semantic meaning. On the other hand, video supervoxel segmentation has
recently been established and applied to large-scale data processing, and
potentially serves as an intermediate representation for high-level video
semantic extraction. Supervoxels are rich decompositions of the video content:
they capture object shape and motion well. However, it is not yet known
whether the supervoxel segmentation retains the semantics of the underlying
video content.
In this paper, we conduct a systematic study of how well actor and action
semantics are retained in video supervoxel segmentation. In our study, human
observers watch supervoxel segmentation videos and try to discriminate both
the actor (human or animal) and the action (one of eight everyday actions). We
gather and analyze a large set of 640 human perceptions over 96 videos at 3
different supervoxel scales. Furthermore, we conduct machine recognition
experiments with a feature defined on the supervoxel segmentation, called
supervoxel shape context, which is inspired by the higher-order processes in
human perception. Our findings suggest that a significant amount of semantics
is retained in the video supervoxel segmentation and can be used for further
video analysis.
Comment: This article is in review at the International Journal of Semantic Computing
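The abstract does not define the supervoxel shape context descriptor. As a rough illustration of the classic shape-context idea it builds on, the following is a minimal sketch that computes log-polar histograms of relative positions over a 2D point set (standing in for, say, supervoxel centroids in one frame). The function name, bin counts, and radius limits are illustrative assumptions, not the paper's actual descriptor.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Classic shape context: for each point, a log-polar histogram of the
    relative positions of all other points. Here 'points' would be
    (hypothetical) supervoxel centroids in a single frame."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = pts[None, :, :] - pts[:, None, :]          # pairwise offsets d[i, j] = pts[j] - pts[i]
    dist = np.linalg.norm(d, axis=-1)
    mean_d = dist[dist > 0].mean()                 # normalize by mean distance for scale invariance
    r = np.clip(dist / mean_d, r_min, r_max)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)  # log-spaced radial bins
    theta = np.arctan2(d[..., 1], d[..., 0]) % (2 * np.pi)
    ri = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    ti = np.minimum((theta / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    H = np.zeros((n, n_r * n_theta))
    for i in range(n):
        for j in range(n):
            if i != j:                             # skip the point's own (zero) offset
                H[i, ri[i, j] * n_theta + ti[i, j]] += 1
    return H / H.sum(axis=1, keepdims=True)        # normalize to a distribution per point

centroids = np.random.default_rng(1).random((40, 2))
H = shape_context(centroids)
print(H.shape)  # (40, 60)
```

Each row of `H` is one point's descriptor; two shapes can then be compared by matching descriptors, e.g. with a chi-squared histogram distance.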
Stochastic Variational Inference for Hidden Markov Models
Variational inference algorithms have proven successful for Bayesian analysis
in large data settings, with recent advances using stochastic variational
inference (SVI). However, such methods have largely been studied in independent
or exchangeable data settings. We develop an SVI algorithm to learn the
parameters of hidden Markov models (HMMs) in a time-dependent data setting. The
challenge in applying stochastic optimization in this setting arises from
dependencies in the chain, which must be broken to consider minibatches of
observations. We propose an algorithm that harnesses the memory decay of the
chain to adaptively bound errors arising from edge effects. We demonstrate the
effectiveness of our algorithm on synthetic experiments and a large genomics
dataset where a batch algorithm is computationally infeasible.
Comment: Appears in Advances in Neural Information Processing Systems (NIPS),
201
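As an illustration of the buffered-subchain idea described above, the sketch below runs SVI for a discrete-emission HMM: each update samples a subchain, runs forward-backward on a window padded with a buffer on each side, discards the buffer marginals, rescales the sufficient statistics to full-chain size, and takes a stochastic natural-gradient step on Dirichlet counts. The fixed buffer length, step-size schedule, and all names are assumptions for the sketch; the paper's algorithm adaptively bounds the buffer using the chain's memory decay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a long chain from a 2-state HMM with 3 discrete symbols.
K, V, T = 2, 3, 20000
A_true = np.array([[0.95, 0.05], [0.10, 0.90]])        # transition matrix
B_true = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])  # emission matrix
z = np.zeros(T, dtype=int)
for t in range(1, T):
    z[t] = rng.choice(K, p=A_true[z[t - 1]])
x = np.array([rng.choice(V, p=B_true[zt]) for zt in z])

def forward_backward(obs, A, B, pi):
    """Scaled forward-backward; returns state marginals and pair marginals."""
    n = len(obs)
    alpha = np.zeros((n, K)); beta = np.zeros((n, K)); c = np.zeros(n)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    xi = np.zeros((n - 1, K, K))
    for t in range(n - 1):
        xi[t] = (alpha[t][:, None] * A
                 * (B[:, obs[t + 1]] * beta[t + 1])[None, :]) / c[t + 1]
    return gamma, xi

# SVI loop over buffered subchains (buffer length fixed here for simplicity).
L, buf = 200, 50
uA = np.ones((K, K)); uB = np.ones((K, V))  # Dirichlet counts (prior = 1)
pi = np.full(K, 1.0 / K)                    # initial distribution held fixed
for it in range(200):
    rho = (it + 10.0) ** -0.6               # Robbins-Monro step size
    s = rng.integers(buf, T - L - buf)
    win = x[s - buf : s + L + buf]          # subchain plus buffers
    A = uA / uA.sum(1, keepdims=True); Bm = uB / uB.sum(1, keepdims=True)
    gamma, xi = forward_backward(win, A, Bm, pi)
    g = gamma[buf : buf + L]                # keep only the central L steps;
    q = xi[buf : buf + L - 1]               # the buffers absorb edge effects
    statA = q.sum(0) * (T / L)              # rescale to full-chain size
    statB = np.zeros((K, V))
    for t in range(L):
        statB[:, win[buf + t]] += g[t]
    statB *= T / L
    uA = (1 - rho) * uA + rho * (1.0 + statA)  # natural-gradient SVI update
    uB = (1 - rho) * uB + rho * (1.0 + statB)

A_est = uA / uA.sum(1, keepdims=True)
print(np.round(A_est, 2))
```

With larger buffers the central marginals approach those of a full-chain pass, at the cost of more computation per minibatch; the memory-decay bound in the paper is what lets the buffer be chosen adaptively rather than by hand.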