Sentence Specified Dynamic Video Thumbnail Generation
With the tremendous growth of video on the Internet, video thumbnails, which
provide previews of video content, are becoming increasingly crucial in shaping
users' online search experience. Conventional video thumbnails are generated
once, purely from the visual characteristics of a video, and then displayed
whenever the video is requested. Because they ignore users' search intentions,
such thumbnails cannot provide a meaningful snapshot of the video content that
users actually care about. In this paper, we define a distinctively new task,
namely sentence specified dynamic video thumbnail generation, in which the
generated thumbnails not only provide a concise preview of the original video
content but also semantically correspond to users' query sentences, thereby
dynamically reflecting their search intentions. To tackle this challenging
task, we propose a novel graph convolved video thumbnail pointer
(GTP). Specifically, GTP leverages a sentence specified video graph
convolutional network to model both the sentence-video semantic interaction and
the internal relationships among video clips conditioned on the sentence
information; on top of this, a temporal conditioned pointer network is
introduced to sequentially generate the sentence specified video thumbnails.
Moreover, for the proposed task we annotate a new dataset based on ActivityNet
Captions, consisting of more than 10,000 video-sentence pairs, each accompanied
by an annotated sentence specified video thumbnail. We demonstrate that the
proposed GTP outperforms several baseline methods on this dataset, and we
believe that our initial results, together with the release of the new dataset,
will inspire further research on sentence specified dynamic video thumbnail
generation. The dataset and code are available at https://github.com/yytzsy/GTP