Read, Watch, and Move: Reinforcement Learning for Temporally Grounding Natural Language Descriptions in Videos
The task of video grounding, which temporally localizes a natural language
description in a video, plays an important role in understanding videos.
Existing studies have adopted strategies of sliding a window over the entire
video or exhaustively ranking all possible clip-sentence pairs in a
pre-segmented video, both of which suffer from the cost of exhaustively
enumerating candidates. To alleviate this problem, we formulate the task as one of
sequential decision making by learning an agent which regulates the temporal
grounding boundaries progressively based on its policy. Specifically, we
propose a reinforcement-learning-based framework enhanced by multi-task
learning, which yields steady performance gains by incorporating additional
supervised boundary information during training. Our proposed framework
achieves state-of-the-art performance on the ActivityNet'18 DenseCaption dataset
and the Charades-STA dataset while observing only 10 or fewer clips per video.
Comment: AAAI 2019
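The sequential decision process the abstract describes can be sketched in miniature: instead of scoring every candidate clip, an agent repeatedly adjusts a temporal window until no action improves the grounding. The action names, step size, and the greedy stand-in for a learned policy below are illustrative assumptions, not the paper's actual design.

```python
def temporal_iou(a, b):
    """Intersection-over-union of two [start, end] intervals (in seconds)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def step(window, action, video_len, delta=5.0):
    """Apply one boundary-regulation action to the current window."""
    start, end = window
    if action == "shift_left":
        start, end = start - delta, end - delta
    elif action == "shift_right":
        start, end = start + delta, end + delta
    elif action == "expand":
        start, end = start - delta, end + delta
    elif action == "shrink":
        start, end = start + delta, end - delta
    start = max(0.0, start)
    end = min(video_len, max(end, start + delta))
    return (start, end)

def rollout(window, target, video_len, max_steps=10):
    """Greedy rollout standing in for a learned policy: at each step, pick
    the action that most increases IoU with the ground-truth segment
    (available at training time), stopping when nothing improves."""
    actions = ["shift_left", "shift_right", "expand", "shrink"]
    for _ in range(max_steps):
        best = max(actions,
                   key=lambda a: temporal_iou(step(window, a, video_len), target))
        new = step(window, best, video_len)
        if temporal_iou(new, target) <= temporal_iou(window, target):
            break  # implicit "stop" action: no move improves the grounding
        window = new
    return window

print(rollout((20.0, 50.0), (40.0, 60.0), video_len=120.0))  # → (30.0, 60.0)
```

Only a handful of windows are inspected per video, which is the efficiency argument the abstract makes against exhaustive enumeration; a trained policy would replace the greedy oracle with a network acting on visual and language features.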
Deep Learning for Dense Interpretation of Video: Survey of Various Approaches, Challenges, Datasets and Metrics
Video interpretation has garnered considerable attention in the computer vision and natural language processing fields due to the rapid expansion of video data and the increasing demand for applications such as intelligent video search, automated video subtitling, and assistance for visually impaired individuals. However, video interpretation presents greater challenges due to the inclusion of both temporal and spatial information within the video. While deep learning models for images, text, and audio have made significant progress, efforts have recently been focused on developing deep networks for video interpretation. A thorough evaluation of current research is necessary to provide insights for future endeavors, considering the myriad techniques, datasets, features, and evaluation criteria available in the video domain. This study offers a survey of recent advancements in deep learning for dense video interpretation, addressing various datasets and the challenges they present, as well as key features in video interpretation. Additionally, it provides a comprehensive overview of the latest deep learning models in video interpretation, which have been instrumental in activity identification and video description or captioning. The paper compares the performance of several deep learning models in this field based on specific metrics. Finally, the study summarizes future trends and directions in video interpretation.
Multi-modal Dense Video Captioning
Dense video captioning is a task of localizing interesting events from an
untrimmed video and producing textual description (captions) for each localized
event. Most of the previous works in dense video captioning are solely based on
visual information and completely ignore the audio track. However, audio, and
speech, in particular, are vital cues for a human observer in understanding an
environment. In this paper, we present a new dense video captioning approach
that is able to utilize any number of modalities for event description.
Specifically, we show how audio and speech modalities may improve a dense video
captioning model. We apply an automatic speech recognition (ASR) system to obtain
a temporally aligned textual description of the speech (similar to subtitles)
and treat it as a separate input alongside video frames and the corresponding
audio track. We formulate the captioning task as a machine translation problem
and utilize the recently proposed Transformer architecture to convert multi-modal
input data into textual descriptions. We demonstrate the performance of our
model on the ActivityNet Captions dataset. The ablation studies indicate a
considerable contribution from the audio and speech components, suggesting that
these modalities carry substantial information complementary to the video frames.
Furthermore, we provide an in-depth analysis of the ActivityNet Captions results
by leveraging the category tags obtained from the original YouTube videos. Code is
publicly available: github.com/v-iashin/MDVC
Comment: To appear in the proceedings of CVPR Workshops 2020; Code:
https://github.com/v-iashin/MDVC; Project Page: https://v-iashin.github.io/mdv
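The multi-modal input preparation the abstract describes can be sketched as follows: visual, audio, and ASR-text features, each with its own dimensionality, are projected into a shared model dimension before a Transformer-style encoder-decoder attends over them. The feature sizes, the plain linear projection, and the simple stacking of streams are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # shared embedding size for all modalities

# Per-modality feature sequences: (num_timesteps, feature_dim), dims differ.
visual = rng.normal(size=(12, 16))   # e.g. frame-level CNN features
audio = rng.normal(size=(20, 10))    # e.g. clip-level audio features
speech = rng.normal(size=(6, 32))    # e.g. embeddings of ASR tokens (subtitles)

def project(feats, d_model, rng):
    """Linear projection of one modality into the shared model space."""
    w = rng.normal(size=(feats.shape[1], d_model)) / np.sqrt(feats.shape[1])
    return feats @ w

# Each modality keeps its own temporal sequence; a multi-modal decoder can
# attend to all of them (here simply stacked into one source sequence).
streams = [project(f, d_model, rng) for f in (visual, audio, speech)]
memory = np.concatenate(streams, axis=0)
print(memory.shape)  # (38, 8): 12 + 20 + 6 timesteps in the shared space
```

From here, framing captioning as machine translation means the decoder generates caption tokens autoregressively while cross-attending to these projected source sequences, exactly as a text-to-text Transformer attends to source-language tokens.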