124 research outputs found
D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding
Recent studies on dense captioning and visual grounding in 3D have achieved
impressive results. Despite developments in both areas, the limited amount of
available 3D vision-language data causes overfitting issues for 3D visual
grounding and 3D dense captioning methods. Moreover, how to discriminatively
describe objects in complex 3D environments remains understudied. To
address these challenges, we present D3Net, an end-to-end neural
speaker-listener architecture that can detect, describe, and discriminate. Our
D3Net unifies dense captioning and visual grounding in 3D in a self-critical
manner. This self-critical property of D3Net also introduces discriminability
during object caption generation and enables semi-supervised training on
ScanNet data with partially annotated descriptions. Our method outperforms SOTA
methods in both tasks on the ScanRefer dataset, surpassing the SOTA 3D dense
captioning method by a significant margin.
Comment: Project website: https://daveredrum.github.io/D3Net
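As a rough illustration of the self-critical speaker-listener idea described above, the sketch below shows a generic self-critical (SCST-style) training step in which a listener's grounding score rewards the speaker. The speaker, listener, and their methods (sample, greedy_decode, ground_iou) are hypothetical placeholders, not the authors' implementation.

    # Minimal sketch of self-critical speaker-listener training.
    # `speaker` (caption generator) and `listener` (grounding model)
    # are hypothetical modules; this is not D3Net's actual code.
    import torch

    def self_critical_step(speaker, listener, scene_feats, target_box, optimizer):
        # Speaker samples a caption stochastically and also decodes
        # greedily to serve as the self-critical baseline.
        sampled_ids, log_probs = speaker.sample(scene_feats)
        greedy_ids = speaker.greedy_decode(scene_feats)

        # Listener grounds each caption back to the target object; its
        # localization IoU is the (non-differentiable) reward signal.
        reward_sample = listener.ground_iou(sampled_ids, scene_feats, target_box)
        reward_greedy = listener.ground_iou(greedy_ids, scene_feats, target_box)

        # Self-critical advantage: sampled reward minus greedy baseline.
        advantage = (reward_sample - reward_greedy).detach()
        loss = -(advantage * log_probs.sum(dim=-1)).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the listener is only rewarded when the generated caption localizes the correct object, this kind of loop pushes the speaker toward discriminative descriptions, which is the self-critical property the abstract refers to.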
Compare and Reweight: Distinctive Image Captioning Using Similar Images Sets
A wide range of image captioning models has been developed, achieving
significant improvements on popular metrics such as BLEU, CIDEr, and
SPICE. However, although the generated captions can accurately describe the
image, they are often generic across similar images and lack distinctiveness,
i.e., they fail to capture what is unique about each image. In this paper, we aim to
improve the distinctiveness of image captions through training with sets of
similar images. First, we propose a distinctiveness metric, between-set CIDEr
(CIDErBtw), to evaluate the distinctiveness of a caption with respect to those
of similar images. Our metric shows that the human annotations of each image
are not equivalent based on distinctiveness. Thus we propose several new
training strategies to encourage the distinctiveness of the generated caption
for each image, which are based on using CIDErBtw in a weighted loss function
or as a reinforcement learning reward. Finally, extensive experiments are
conducted, showing that our proposed approach significantly improves both
distinctiveness (as measured by CIDErBtw and retrieval metrics) and accuracy
(e.g., as measured by CIDEr) for a wide variety of image captioning baselines.
These results are further confirmed through a user study.
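To illustrate how a between-set score could enter a reinforcement learning reward as the abstract describes, here is a minimal sketch. The `cider` scorer, the averaging over similar-image reference sets, and the weighting factor are assumptions for illustration, not the paper's exact formulation.

    # Sketch of a distinctiveness-aware reward in the spirit of CIDErBtw.
    # `cider(caption, refs)` is a hypothetical scorer, e.g. a thin wrapper
    # around a standard CIDEr implementation.
    def distinctiveness_reward(caption, gt_captions, similar_set_captions,
                               weight=0.5):
        # Accuracy term: similarity to the image's own ground-truth captions.
        r_within = cider(caption, gt_captions)

        # Between-set term: average similarity to the reference captions of
        # *similar* images; lower means the caption is more distinctive.
        r_between = sum(cider(caption, refs) for refs in similar_set_captions)
        r_between /= max(len(similar_set_captions), 1)

        # Reward accuracy while penalizing resemblance to similar images.
        return r_within - weight * r_between

The same quantity can serve as a per-sample weight in a supervised loss instead of an RL reward; both usages follow the strategy the abstract outlines, with the trade-off controlled by the weighting factor.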
- …