Geodesic Distance Histogram Feature for Video Segmentation
This paper proposes a geodesic-distance-based feature that encodes global
information for improved video segmentation algorithms. The feature is a joint
histogram of intensity and geodesic distances, where the geodesic distances are
computed as the shortest paths between superpixels via their boundaries. We
also incorporate adaptive voting weights and spatial pyramid configurations to
include spatial information into the geodesic histogram feature and show that
this further improves results. The feature is generic and can be used as part
of various algorithms. In experiments, we test the geodesic histogram feature
by incorporating it into two existing video segmentation frameworks. This leads
to significantly better performance in 3D video segmentation benchmarks on two
datasets.
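As a rough illustration of the described feature, the sketch below computes geodesic (shortest-path) distances over a superpixel region graph and bins them jointly with superpixel intensities. It assumes mean intensities and a boundary-weighted adjacency list are already available; the function names, weights, and bin counts are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): joint intensity/geodesic-distance
# histogram over superpixels, given precomputed mean intensities and a
# region adjacency graph with boundary-based edge weights.
import heapq
import numpy as np

def geodesic_distances(adjacency, source):
    """Dijkstra shortest paths from one superpixel over the region graph.
    adjacency: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {node: float("inf") for node in adjacency}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adjacency[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def geodesic_histogram(intensities, adjacency, source, bins=(8, 8)):
    """Joint histogram of superpixel intensity vs. geodesic distance to `source`."""
    dist = geodesic_distances(adjacency, source)
    nodes = sorted(adjacency)
    inten = np.array([intensities[n] for n in nodes])
    geo = np.array([dist[n] for n in nodes])
    hist, _, _ = np.histogram2d(inten, geo, bins=bins)
    return hist / max(hist.sum(), 1e-12)  # normalize to a probability histogram
```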
A Temporal Sequence Learning for Action Recognition and Prediction
In this work\footnote {This work was supported in part by the National
Science Foundation under grant IIS-1212948.}, we present a method to represent
a video with a sequence of words, and learn the temporal sequencing of such
words as the key information for predicting and recognizing human actions. We
leverage core concepts from the Natural Language Processing (NLP) literature
used in sentence classification to solve the problems of action prediction and
action recognition. Each frame is converted into a word that is represented as
a vector using the Bag of Visual Words (BoW) encoding method. The words are
then combined into a sentence that represents the video. The sequence of
words in different actions is learned with a simple but effective
Temporal Convolutional Neural Network (T-CNN) that captures the temporal
sequencing of information in a video sentence. We demonstrate that a key
characteristic of the proposed method is its low latency, i.e., its ability to
predict an action accurately with a partial sequence (sentence). Experiments on
two datasets, \textit{UCF101} and \textit{HMDB51} show that the method on
average reaches 95\% of its accuracy within half the video frames. Results
also demonstrate that our method achieves comparable state-of-the-art
performance in action recognition (i.e. at the completion of the sentence) in
addition to action prediction.
Comment: 10 pages, 8 figures, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV)
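To make the described pipeline concrete, here is a minimal sketch of a temporal 1D convolution over a sequence of per-frame BoW vectors, followed by global pooling and a linear classifier. The layer sizes, pooling choice, and class count are assumptions for illustration, not the paper's exact T-CNN.

```python
# Minimal sketch (assumed architecture, not the authors' exact T-CNN):
# temporal convolution over a "sentence" of per-frame Bag-of-Visual-Words vectors.
import torch
import torch.nn as nn

class TemporalConvClassifier(nn.Module):
    def __init__(self, vocab_size=1000, hidden=128, num_classes=101):
        super().__init__()
        # Convolve over time; each frame's BoW histogram is one "word" vector.
        self.conv = nn.Sequential(
            nn.Conv1d(vocab_size, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, bow_sequence):
        # bow_sequence: (batch, vocab_size, num_frames)
        features = self.conv(bow_sequence)
        pooled = features.mean(dim=-1)   # global average over time
        return self.classifier(pooled)   # class logits

# Usage: prediction from a partial "sentence" (only the frames observed so far).
model = TemporalConvClassifier()
partial_video = torch.randn(2, 1000, 16)  # 2 clips, 16 observed frames each
logits = model(partial_video)
```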
Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks
Foundational multimodal models pre-trained on large scale image-text pairs or
video-text pairs or both have shown strong generalization abilities on
downstream tasks. However, unlike image-text models, pretraining video-text
models is not always feasible due to the difficulty in collecting large-scale
clean and aligned data, and exponential computational costs involved in the
pretraining phase. Therefore, the pertinent question to ask is: Can image-text
models be adapted to video tasks and is there any benefit to using these models
over pretraining directly on videos? In this work, we focus on this question by
proposing a detailed study on the generalization abilities of image-text models
when evaluated on video understanding tasks in a zero-shot setting. We
investigate 9 foundational image-text models on a diverse set of video tasks
that include video action recognition (video AR), video retrieval (video RT),
video question answering (video QA), video multiple choice (video MC) and video
captioning (video CP). Our experiments show that image-text models exhibit
impressive performance on video AR, video RT and video MC. Furthermore, they
perform moderately on video captioning and poorly on video QA. These findings
shed light on the benefits of adapting foundational image-text models to an
array of video tasks while avoiding the costly pretraining step.
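One common way to adapt an image-text model to zero-shot video action recognition is to embed sampled frames, mean-pool them into a video embedding, and match it against text prompts for each class. The sketch below illustrates this with the Hugging Face CLIP API; the checkpoint name, prompt template, and mean-pooling protocol are assumptions, not necessarily the evaluation protocol used in the paper.

```python
# Minimal sketch of one zero-shot adaptation strategy (an assumption, not the
# paper's exact protocol): CLIP frame embeddings, mean-pooled over time,
# matched against per-class text prompts by cosine similarity.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_action_scores(frames, class_names):
    """frames: list of PIL images sampled from one video clip."""
    prompts = [f"a video of a person {name}" for name in class_names]
    inputs = processor(text=prompts, images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        image_feats = model.get_image_features(pixel_values=inputs["pixel_values"])
        text_feats = model.get_text_features(input_ids=inputs["input_ids"],
                                             attention_mask=inputs["attention_mask"])
    # Mean-pool frame embeddings into one video embedding, then cosine-match.
    video_feat = image_feats.mean(dim=0, keepdim=True)
    video_feat = video_feat / video_feat.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    return (video_feat @ text_feats.T).squeeze(0)  # one score per class
```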
- …