Deep attentive video summarization with distribution consistency learning
This article studies supervised video summarization by formulating it as a sequence-to-sequence learning problem, in which the input and output are the sequence of original video frames and the sequence of their predicted importance scores, respectively. Two critical issues are addressed: short-term contextual attention insufficiency and distribution inconsistency. The former refers to the failure of existing approaches to capture short-term contextual attention within the video sequence itself, since they focus largely on long-term encoder-decoder attention. The latter refers to the inconsistency between the distributions of the predicted importance score sequence and the ground-truth sequence, which may lead to a suboptimal solution. To mitigate the first issue, we incorporate a self-attention mechanism in the encoder to highlight important keyframes within a short-term context; this mechanism, together with the encoder-decoder attention, constitutes our deep attentive model for video summarization. For the second issue, we propose a distribution consistency learning method that employs a simple yet effective regularization loss term, which encourages a consistent distribution across the two sequences. Our final approach is dubbed Attentive and Distribution-consistent video Summarization (ADSum). Extensive experiments on benchmark data sets demonstrate the superiority of the proposed ADSum approach over state-of-the-art approaches.
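The abstract does not specify the form of the regularization loss term. As a minimal sketch only, one plausible choice for matching the two distributions is a KL divergence between softmax-normalized importance-score sequences; the function names and the use of KL here are hypothetical, not the paper's actual formulation:

```python
import math

def softmax(scores):
    """Normalize a score sequence into a probability distribution (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def distribution_consistency_loss(predicted, ground_truth):
    """KL(ground_truth || predicted) over the normalized importance sequences.

    Hypothetical sketch: the actual ADSum regularizer is not given in the
    abstract; KL divergence is merely one common way to penalize a mismatch
    between two distributions over frame-importance scores.
    """
    p = softmax(ground_truth)
    q = softmax(predicted)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when the two sequences induce identical distributions and grows as they diverge, which matches the abstract's description of "seeking a consistent distribution for the two sequences."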
Summarizing First-Person Videos from Third Persons' Points of Views
Video highlight or summarization is among interesting topics in computer
vision, which benefits a variety of applications like viewing, searching, or
storage. However, most existing studies rely on training data of third-person
videos, which cannot easily generalize to highlight the first-person ones. With
the goal of deriving an effective model to summarize first-person videos, we
propose a novel deep neural network architecture for describing and
discriminating vital spatiotemporal information across videos with different
points of view. Our proposed model is realized in a semi-supervised setting, in
which fully annotated third-person videos, unlabeled first-person videos, and a
small number of annotated first-person ones are presented during training. In
our experiments, qualitative and quantitative evaluations on both benchmarks
and our collected first-person video datasets are presented. Comment: 16+10 pages, ECCV 201
Query-Focused Video Summarization: Dataset, Evaluation, and A Memory Network Based Approach
Recent years have witnessed a resurgence of interest in video summarization.
However, one of the main obstacles to the research on video summarization is
user subjectivity: users have varied preferences over summaries. This
subjectivity causes at least two problems. First, no single video summarizer
fits all users unless it interacts with and adapts to the individual users.
Second, it is very challenging to evaluate the performance of a video
summarizer.
To tackle the first problem, we explore the recently proposed query-focused
video summarization which introduces user preferences in the form of text
queries about the video into the summarization process. We propose a
memory-network-parameterized sequential determinantal point process that
attends to the user query over different video frames and shots. To address the second
challenge, we contend that a good evaluation metric for video summarization
should focus on the semantic information that humans can perceive rather than
the visual features or temporal overlaps. To this end, we collect dense
per-video-shot concept annotations, compile a new dataset, and suggest an
efficient evaluation method defined upon the concept annotations. We conduct
extensive experiments contrasting our video summarizer to existing ones and
present detailed analyses of the dataset and the new evaluation method.
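The memory-network parameterization above cannot be reconstructed from the abstract alone, but the determinantal point process (DPP) it builds on has a simple closed form: for an L-ensemble kernel L, the probability of selecting a subset S of items (e.g., video shots) is det(L_S) / det(L + I). A minimal, self-contained sketch of that base quantity (the kernel values here are illustrative, not from the paper):

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n = len(a)
    d = 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def dpp_probability(L, subset):
    """P(subset) = det(L_S) / det(L + I) for an L-ensemble DPP with kernel L."""
    n = len(L)
    L_S = [[L[i][j] for j in subset] for i in subset]
    L_I = [[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
           for i in range(n)]
    numerator = det(L_S) if subset else 1.0  # det of the empty submatrix is 1
    return numerator / det(L_I)
```

The determinant of L_S shrinks when the selected items are similar under the kernel, so a DPP naturally favors diverse subsets, which is why it is a common backbone for summarizers that must pick representative, non-redundant shots.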