Empirical Autopsy of Deep Video Captioning Frameworks
Contemporary deep learning-based video captioning follows the encoder-decoder
framework. In the encoder, visual features are extracted with 2D/3D Convolutional
Neural Networks (CNNs), and a transformed version of those features is passed to
the decoder. The decoder uses word embeddings and a language model to map the
visual features to natural language captions.
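For concreteness, a minimal sketch of such a pipeline in PyTorch is shown below. The module names, dimensions, and the mean-pooling transformation are illustrative assumptions, not the specific configuration studied in this article:

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Toy encoder-decoder captioner: pooled CNN frame features condition a
    GRU language model over word embeddings. All dimensions are illustrative."""

    def __init__(self, vocab_size, feat_dim=2048, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.visual_proj = nn.Linear(feat_dim, hidden_dim)   # feature transformation
        self.embed = nn.Embedding(vocab_size, embed_dim)     # word embeddings
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # language model
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, frames, feat_dim) from a 2D/3D CNN encoder
        pooled = frame_feats.mean(dim=1)                        # mean-pool over time
        h0 = torch.tanh(self.visual_proj(pooled)).unsqueeze(0)  # initial hidden state
        emb = self.embed(captions)                              # (batch, seq, embed_dim)
        outputs, _ = self.gru(emb, h0)
        return self.out(outputs)                                # per-step vocabulary logits

# Example forward pass with random stand-in data:
model = CaptionDecoder(vocab_size=10000)
logits = model(torch.randn(4, 20, 2048), torch.randint(0, 10000, (4, 12)))
print(logits.shape)  # torch.Size([4, 12, 10000])
```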
Due to its composite nature, the encoder-decoder pipeline allows multiple
choices for each of its components, e.g., the CNN model, feature transformation,
word embedding, and language model. Component selection can have a drastic
effect on overall video captioning performance.
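To make this design space concrete, the component choices can be enumerated as a grid. The options listed below are common examples chosen for illustration, not the exhaustive set evaluated here:

```python
from itertools import product

# Illustrative (not exhaustive) options for each pipeline component.
components = {
    "cnn_encoder":    ["VGG16", "ResNet-152", "C3D"],        # 2D/3D CNNs
    "transformation": ["mean_pool", "temporal_attention"],
    "word_embedding": ["random_init", "GloVe", "word2vec"],
    "language_model": ["LSTM", "GRU"],
}

# Every pipeline variant obtainable by component selection alone.
variants = list(product(*components.values()))
print(f"{len(variants)} candidate pipelines")  # 3 * 2 * 3 * 2 = 36
```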
However, the current literature lacks a systematic investigation in this
regard. This article fills that gap by providing the first thorough empirical
analysis of the role that each major component plays in a contemporary video
captioning pipeline. We perform extensive experiments by varying the
constituent components of the framework and quantify the performance gains
achievable through component selection alone.
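As a sketch of how such gains might be quantified, each variant's generated captions can be scored against the reference captions with a standard metric such as corpus-level BLEU (via NLTK); the toy tokenized captions below are placeholders, not data from our experiments:

```python
from nltk.translate.bleu_score import corpus_bleu

# Toy example: one entry per video. Each reference entry is a list of
# tokenized ground-truth captions; each hypothesis is one generated caption.
references = [
    [["a", "man", "is", "playing", "a", "guitar"],
     ["a", "person", "plays", "guitar"]],
]
hypotheses = [["a", "man", "is", "playing", "guitar"]]

print(f"corpus BLEU-4: {corpus_bleu(references, hypotheses):.3f}")
```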
We use the popular MSVD dataset as the test-bed and demonstrate that
substantial performance gains are possible through careful selection of the
constituent components, without major changes to the pipeline itself. These
results are expected to provide guiding principles for future research in the
fast-growing area of video captioning.

Comment: 9 pages, 5 figures