Video Captioning via Hierarchical Reinforcement Learning
Video captioning is the task of automatically generating a textual
description of the actions in a video. Although previous work (e.g.,
sequence-to-sequence models) has shown promising results in abstracting a coarse
description of a short video, it is still very challenging to caption a video
containing multiple fine-grained actions with a detailed description. This
paper aims to address the challenge by proposing a novel hierarchical
reinforcement learning framework for video captioning, where a high-level
Manager module learns to design sub-goals and a low-level Worker module
recognizes the primitive actions to fulfill the sub-goal. With this
compositional framework to reinforce video captioning at different levels, our
approach significantly outperforms all the baseline methods on a newly
introduced large-scale dataset for fine-grained video captioning. Furthermore,
our non-ensemble model has already achieved state-of-the-art results on the
widely used MSR-VTT dataset.
Comment: CVPR 2018, with supplementary material
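To make the Manager/Worker decomposition concrete, here is a minimal PyTorch sketch of such a hierarchical decoding loop. Everything in it is an illustrative assumption rather than the paper's exact architecture: the module names, the dimensions, the greedy decoding, and in particular the fixed goal-refresh interval, which stands in for the paper's learned decisions about when to issue a new sub-goal.

import torch
import torch.nn as nn

class Manager(nn.Module):
    # High-level module: turns pooled video context into a latent sub-goal.
    def __init__(self, feat_dim, goal_dim):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, goal_dim)

    def forward(self, video_feat, h):
        return self.rnn(video_feat, h)  # the hidden state serves as the sub-goal

class Worker(nn.Module):
    # Low-level module: emits one word per step, conditioned on the sub-goal.
    def __init__(self, vocab_size, emb_dim, goal_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRUCell(emb_dim + goal_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_word, goal, h):
        x = torch.cat([self.embed(prev_word), goal], dim=-1)
        h = self.rnn(x, h)
        return self.out(h), h           # logits over the vocabulary

# Toy decoding loop: refresh the sub-goal every segment_len words.
vocab, feat_dim, goal_dim, hid_dim = 1000, 512, 256, 256
manager = Manager(feat_dim, goal_dim)
worker = Worker(vocab, 128, goal_dim, hid_dim)
video_feat = torch.randn(1, feat_dim)               # pooled video feature (assumed)
goal_h, worker_h = torch.zeros(1, goal_dim), torch.zeros(1, hid_dim)
word = torch.zeros(1, dtype=torch.long)             # <BOS> token, id 0 (assumed)
caption, segment_len = [], 4
for t in range(12):
    if t % segment_len == 0:                        # Manager sets a new sub-goal
        goal_h = manager(video_feat, goal_h)
    logits, worker_h = worker(word, goal_h, worker_h)
    word = logits.argmax(dim=-1)                    # greedy word choice
    caption.append(word.item())
print(caption)                                      # 12 token ids from an untrained model

In the paper both levels are trained with reinforcement learning against caption-quality rewards; the sketch only shows the forward decoding structure, not the training signal.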
Video Fill In the Blank using LR/RL LSTMs with Spatial-Temporal Attentions
Given a video and a description sentence with one missing word (we call it
the "source sentence"), Video-Fill-In-the-Blank (VFIB) problem is to find the
missing word automatically. The contextual information of the sentence, as well
as visual cues from the video, are important to infer the missing word
accurately. Since the source sentence is broken into two fragments, the left
fragment (before the blank) and the right fragment (after the blank),
traditional Recurrent Neural Networks cannot encode this structure accurately:
the missing word can vary widely in both its position and its type. For
example, the missing word may be the first word of the sentence or appear in
the middle, and it may be a verb or an adjective. In this paper, we propose a framework to tackle
the textual encoding: Two separate LSTMs (the LR and RL LSTMs) are employed to
encode the left and right sentence fragments and a novel structure is
introduced to combine each fragment with an "external memory" corresponding to
the opposite fragment. For the visual encoding, end-to-end spatial and temporal
attention models are employed to select discriminative visual representations
to find the missing word. In the experiments, we demonstrate the superior
performance of the proposed method on the challenging VFIB problem. Furthermore, we
introduce an extended and more generalized version of VFIB, which is not
limited to a single blank. Our experiments indicate the generalization
capability of our method in dealing with these more realistic scenarios.
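As a rough illustration of the LR/RL textual encoding, the following sketch reads the left fragment left-to-right and the right fragment right-to-left with two separate LSTMs, so that both encoders end at the blank, and then scores candidate words from the fused final states. The dimensions are arbitrary, and a plain concatenation stands in for the paper's external-memory combination; the spatial-temporal visual attention is omitted entirely.

import torch
import torch.nn as nn

class BlankEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lr_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)  # left fragment, left-to-right
        self.rl_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)  # right fragment, right-to-left
        self.out = nn.Linear(2 * hid_dim, vocab_size)

    def forward(self, left_ids, right_ids):
        _, (h_lr, _) = self.lr_lstm(self.embed(left_ids))
        # Reverse the right fragment so this LSTM also finishes at the blank.
        _, (h_rl, _) = self.rl_lstm(self.embed(right_ids.flip(dims=[1])))
        fused = torch.cat([h_lr[-1], h_rl[-1]], dim=-1)
        return self.out(fused)          # scores for the missing word

model = BlankEncoder(vocab_size=5000)
left = torch.randint(0, 5000, (2, 6))   # token ids before the blank (batch of 2)
right = torch.randint(0, 5000, (2, 4))  # token ids after the blank
print(model(left, right).shape)         # torch.Size([2, 5000])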
A Neural Multi-sequence Alignment TeCHnique (NeuMATCH)
The alignment of heterogeneous sequential data (video to text) is an
important and challenging problem. Standard techniques for this task, including
Dynamic Time Warping (DTW) and Conditional Random Fields (CRFs), suffer from
inherent drawbacks. Mainly, the Markov assumption implies that, given the
immediate past, future alignment decisions are independent of further history.
The separation between similarity computation and alignment decision also
prevents end-to-end training. In this paper, we propose an end-to-end neural
architecture where alignment actions are implemented as moving data between
stacks of Long Short-term Memory (LSTM) blocks. This flexible architecture
supports a large variety of alignment tasks, including one-to-one, one-to-many,
skipping unmatched elements, and (with extensions) non-monotonic alignment.
Extensive experiments on semi-synthetic and real datasets show that our
algorithm outperforms state-of-the-art baselines.
Comment: Accepted at CVPR 2018 (Spotlight). The arXiv file includes the paper
and the supplemental material
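To illustrate alignment as a sequence of discrete actions over the two streams, here is a greedy toy version with an assumed three-action set (MATCH, POP_VIDEO, POP_TEXT) and a simple MLP scoring the current front elements. It is only a sketch of the action mechanism: the NeuMATCH architecture itself encodes the stacks with LSTM blocks and is trained end-to-end, whereas this untrained scorer produces arbitrary alignments.

import torch
import torch.nn as nn

ACTIONS = ["MATCH", "POP_VIDEO", "POP_TEXT"]  # assumed minimal action set

class ActionScorer(nn.Module):
    def __init__(self, vid_dim, txt_dim, hid=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vid_dim + txt_dim, hid), nn.ReLU(),
            nn.Linear(hid, len(ACTIONS)))

    def forward(self, vid_top, txt_top):
        # Score the three actions from the front elements of both streams.
        return self.mlp(torch.cat([vid_top, txt_top], dim=-1))

def align(video_feats, text_feats, scorer):
    """Greedy monotonic alignment: pick an action, advance the stream(s)."""
    matches, i, j = [], 0, 0
    while i < len(video_feats) and j < len(text_feats):
        act = ACTIONS[int(scorer(video_feats[i], text_feats[j]).argmax())]
        if act == "MATCH":
            matches.append((i, j)); i += 1; j += 1
        elif act == "POP_VIDEO":
            i += 1                      # skip an unmatched video clip
        else:
            j += 1                      # skip an unmatched sentence
    return matches

scorer = ActionScorer(vid_dim=64, txt_dim=64)
video = [torch.randn(64) for _ in range(5)]   # clip features (assumed precomputed)
text = [torch.randn(64) for _ in range(4)]    # sentence features (assumed precomputed)
print(align(video, text, scorer))

One-to-many matching, as supported by the paper, would need an extra action that matches without popping one of the streams; the three-action set above only covers one-to-one matching with skipping.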