Recently, much progress has been made in image captioning, and an
encoder-decoder framework has achieved outstanding performance for this task.
In this paper, we propose an extension of the encoder-decoder framework by
adding a component called the guiding network. The guiding network models the
attribute properties of input images, and its output is leveraged to compose
the input of the decoder at each time step. The guiding network can be plugged
into the current encoder-decoder framework and trained in an end-to-end manner.
Hence, the guiding vector can be adaptively learned according to the signal
from the decoder, enabling it to embed information from both image and
language. Additionally, discriminative supervision can be employed to further
improve the quality of guidance. The advantages of our proposed approach are
verified by experiments carried out on the MS COCO dataset.
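Below is a minimal sketch, in PyTorch-style Python, of how such a guiding network could be wired into an encoder-decoder captioner: a small MLP maps image features to a guiding vector, which is concatenated with the word embedding to form the decoder input at each time step, and an auxiliary attribute head illustrates the optional discriminative supervision. All module names, dimensions, and the attribute head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GuidedDecoder(nn.Module):
    # Sketch of the described idea: a guiding network whose output is
    # concatenated with the word embedding at every decoding step.
    # Names and sizes are assumptions, not the authors' implementation.
    def __init__(self, vocab_size, num_attrs=1000, feat_dim=2048,
                 embed_dim=512, guide_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Guiding network: models attribute properties of the image and
        # is trained end-to-end with the rest of the model.
        self.guide = nn.Sequential(
            nn.Linear(feat_dim, guide_dim),
            nn.ReLU(),
            nn.Linear(guide_dim, guide_dim),
        )
        # Optional discriminative supervision: a hypothetical auxiliary
        # attribute classifier applied to the guiding vector.
        self.attr_head = nn.Linear(guide_dim, num_attrs)
        # Decoder input at each step is [word embedding; guiding vector].
        self.lstm = nn.LSTMCell(embed_dim + guide_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, feat_dim) pooled CNN features from the encoder
        # captions: (B, T) token ids of the ground-truth caption
        g = self.guide(feats)            # guiding vector, (B, guide_dim)
        attr_logits = self.attr_head(g)  # for the auxiliary attribute loss
        h = feats.new_zeros(feats.size(0), self.lstm.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(captions.size(1)):
            # Compose the decoder input from the word embedding and g.
            x = torch.cat([self.embed(captions[:, t]), g], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1), attr_logits
```

Because the guiding vector is produced by differentiable modules, the caption loss backpropagates through the decoder into the guiding network; this is what lets the guidance adapt to signals from both the image and the language side, as the abstract describes.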