Enhancing image captioning with depth information using a Transformer-based framework
Captioning images is a challenging scene-understanding task that connects
computer vision and natural language processing. While image captioning models
have been successful in producing excellent descriptions, the field has
primarily focused on generating a single sentence for 2D images. This paper
investigates whether integrating depth information with RGB images can enhance
the captioning task and generate better descriptions. For this purpose, we
propose a Transformer-based encoder-decoder framework for generating a
multi-sentence description of a 3D scene. The RGB image and its corresponding
depth map are provided as inputs to our framework, which combines them to
produce a better understanding of the input scene. Depth maps can be ground
truth or estimated, which makes our framework widely applicable to any RGB
captioning dataset. We explore different approaches for fusing the RGB and
depth images. The experiments are performed on the NYU-v2 dataset and the
Stanford image paragraph captioning dataset. During our work with the NYU-v2
dataset, we found inconsistent labeling that prevents depth information from
benefiting the captioning task; the results were even worse than those obtained
with RGB images alone. As a result, we propose a cleaned version of the
NYU-v2 dataset that is more consistent and informative. Our results on both
datasets demonstrate that the proposed framework effectively benefits from
depth information, whether it is ground truth or estimated, and generates
better captions. Code, pre-trained models, and the cleaned version of the
NYU-v2 dataset will be made publicly available.

Comment: 19 pages, 5 figures, 13 tables
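
As an illustration of the kind of RGB-depth fusion the abstract describes, the sketch below shows one possible early-fusion encoder in PyTorch. The class name, feature dimensions, and the choice of element-wise addition as the fusion operator are assumptions made for illustration; they are not the authors' released implementation, which explores several fusion variants.

```python
# Hypothetical sketch of early fusion of RGB and depth features ahead of a
# Transformer encoder. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class RGBDFusionEncoder(nn.Module):
    def __init__(self, feat_dim=2048, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # Separate projections for RGB and depth region features
        # (e.g. CNN backbone outputs of shape [B, N, feat_dim]).
        self.rgb_proj = nn.Linear(feat_dim, d_model)
        self.depth_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, rgb_feats, depth_feats):
        # Early fusion by element-wise addition of the projected modalities;
        # concatenation or cross-attention are alternative fusion choices.
        fused = self.rgb_proj(rgb_feats) + self.depth_proj(depth_feats)
        # Returns [B, N, d_model] memory for a caption decoder.
        return self.encoder(fused)

# Example: a batch of 2 images, each with 36 region features of dimension 2048.
rgb = torch.randn(2, 36, 2048)
depth = torch.randn(2, 36, 2048)
memory = RGBDFusionEncoder()(rgb, depth)
print(memory.shape)  # torch.Size([2, 36, 256])
```

The encoder output would then serve as the memory attended to by a Transformer caption decoder; since the depth branch only requires a depth map aligned with the RGB image, an estimated depth map can stand in for ground truth, as the abstract notes.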