1,236 research outputs found

    What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?

    Full text link
    In neural image captioning systems, a recurrent neural network (RNN) is typically viewed as the primary `generation' component. This view suggests that the image features should be `injected' into the RNN. This is in fact the dominant view in the literature. Alternatively, the RNN can instead be viewed as only encoding the previously generated words. This view suggests that the RNN should only be used to encode linguistic features and that only the final representation should be `merged' with the image features at a later stage. This paper compares these two architectures. We find that, in general, late merging outperforms injection, suggesting that RNNs are better viewed as encoders, rather than generators. (Comment: Appears in: Proceedings of the 10th International Conference on Natural Language Generation, INLG'17.)
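    As a concrete illustration of the `inject' option discussed above, the sketch below conditions an LSTM language model on image features by initialising its hidden state (init-inject). It is a minimal PyTorch-style example for illustration only; the layer sizes, names and the specific inject variant are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class InjectCaptioner(nn.Module):
    """'Inject' sketch: image features condition the RNN directly,
    here by initialising the LSTM hidden state (init-inject)."""
    def __init__(self, vocab_size, img_dim=2048, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)    # image -> initial hidden state
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)      # next-word logits

    def forward(self, img_feats, captions):
        h0 = torch.tanh(self.img_proj(img_feats)).unsqueeze(0)  # (1, B, H)
        c0 = torch.zeros_like(h0)
        x = self.embed(captions)                                 # (B, T, E)
        hs, _ = self.lstm(x, (h0, c0))
        return self.out(hs)                                      # (B, T, vocab)
```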

    Where to put the image in an image caption generator

    Get PDF
    When a neural language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in a recurrent neural network (conditioning the language model by injecting image features) or in a layer following the recurrent neural network (conditioning the language model by merging the image features). While merging implies that visual features are bound at the end of the caption generation process, injecting can bind the visual features at a variety of stages. In this paper we empirically show that late binding is superior to early binding in terms of different evaluation metrics. This suggests that the different modalities (visual and linguistic) for caption generation should not be jointly encoded by the RNN; rather, the multimodal integration should be delayed to a subsequent stage. Furthermore, this suggests that recurrent neural networks should not be viewed as actually generating text, but only as encoding it for prediction in a subsequent layer. (Peer-reviewed.)
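    For contrast with the inject sketch above, the following is a minimal sketch of the `merge' (late-binding) option: the RNN encodes only the word prefix, and the image features are combined with its output in a later multimodal layer before prediction. Again, the dimensions and the exact fusion used here (concatenation) are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class MergeCaptioner(nn.Module):
    """'Merge' sketch: the RNN encodes only the word prefix; image features
    are fused with its output in a subsequent multimodal layer."""
    def __init__(self, vocab_size, img_dim=2048, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # language only
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)              # fuse, then predict

    def forward(self, img_feats, captions):
        hs, _ = self.lstm(self.embed(captions))                       # (B, T, H)
        img = torch.tanh(self.img_proj(img_feats))                    # (B, H)
        img = img.unsqueeze(1).expand(-1, hs.size(1), -1)             # (B, T, H)
        return self.out(torch.cat([hs, img], dim=-1))                 # (B, T, vocab)
```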

    DeepOpht: Medical Report Generation for Retinal Images via Deep Models and Visual Explanation

    Full text link
    In this work, we propose an AI-based method that intends to improve the conventional retinal disease treatment procedure and help ophthalmologists increase diagnosis efficiency and accuracy. The proposed method is composed of a deep neural networks-based (DNN-based) module, including a retinal disease identifier and clinical description generator, and a DNN visual explanation module. To train and validate the effectiveness of our DNN-based module, we propose a large-scale retinal disease image dataset. Also, as ground truth, we provide a retinal image dataset manually labeled by ophthalmologists to qualitatively show that the proposed AI-based method is effective. With our experimental results, we show that the proposed method is quantitatively and qualitatively effective. Our method is capable of creating meaningful retinal image descriptions and visual explanations that are clinically relevant. (Comment: Accepted to IEEE WACV 2021.)
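    The abstract does not give implementation details, but the general shape of such a system (a shared image encoder feeding a disease classifier and a description decoder, plus a gradient-based saliency map standing in for the visual-explanation module) can be sketched as follows. Everything here, including the ResNet backbone and the simple gradient saliency, is a hypothetical illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class RetinalReportPipeline(nn.Module):
    """Illustrative sketch: disease identifier (CNN head) plus clinical
    description generator (LSTM decoder) over a shared image encoder."""
    def __init__(self, num_diseases, vocab_size, hidden_dim=512):
        super().__init__()
        backbone = models.resnet50(weights=None)             # hypothetical backbone choice
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.encoder = backbone                               # shared image encoder
        self.identifier = nn.Linear(feat_dim, num_diseases)   # retinal disease identifier
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.word_out = nn.Linear(hidden_dim, vocab_size)     # clinical description tokens

    def forward(self, images, report_tokens):
        feats = self.encoder(images)                          # (B, feat_dim)
        disease_logits = self.identifier(feats)
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)
        hs, _ = self.decoder(self.embed(report_tokens), (h0, torch.zeros_like(h0)))
        return disease_logits, self.word_out(hs)

def saliency_map(model, image, disease_idx):
    """Crude visual explanation: gradient of one disease score w.r.t. pixels.
    Assumes model.eval() has been called beforehand."""
    image = image.clone().requires_grad_(True)                # image: (C, H, W)
    feats = model.encoder(image.unsqueeze(0))
    model.identifier(feats)[0, disease_idx].backward()
    return image.grad.abs().max(dim=0).values                 # (H, W) heat map
```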

    Linking Image and Text with 2-Way Nets

    Full text link
    Linking two data sources is a basic building block in numerous computer vision problems. Canonical Correlation Analysis (CCA) achieves this by utilizing a linear optimizer in order to maximize the correlation between the two views. Recent work makes use of non-linear models, including deep learning techniques, that optimize the CCA loss in some feature space. In this paper, we introduce a novel, bi-directional neural network architecture for the task of matching vectors from two data sources. Our approach employs two tied neural network channels that project the two views into a common, maximally correlated space using the Euclidean loss. We show a direct link between the correlation-based loss and Euclidean loss, enabling the use of Euclidean loss for correlation maximization. To overcome common Euclidean regression optimization problems, we modify well-known techniques to our problem, including batch normalization and dropout. We show state-of-the-art results on a number of computer vision matching tasks including MNIST image matching and sentence-image matching on the Flickr8k, Flickr30k and COCO datasets. (Comment: 14 pages, 2 figures, 6 tables.)
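    The sketch below illustrates the general idea of projecting two views into a common space with two channels trained to agree under a Euclidean (MSE) loss, using batch normalization and dropout as mentioned in the abstract. It is not the paper's tied, bi-directional 2-Way Net formulation; the architecture, dimensions and training loop are assumptions for illustration.

```python
import torch
import torch.nn as nn

def make_channel(in_dim, out_dim, hidden=1024, p_drop=0.3):
    """One projection channel: view -> shared space, with batch norm and dropout."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, out_dim), nn.BatchNorm1d(out_dim),
    )

class TwoViewMatcher(nn.Module):
    """Project both views into a common space; paired projections should agree."""
    def __init__(self, img_dim=4096, txt_dim=300, shared_dim=512):
        super().__init__()
        self.img_net = make_channel(img_dim, shared_dim)
        self.txt_net = make_channel(txt_dim, shared_dim)

    def forward(self, img, txt):
        return self.img_net(img), self.txt_net(txt)

# Training step sketch: minimise squared Euclidean distance between paired views.
model = TwoViewMatcher()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
img_batch, txt_batch = torch.randn(32, 4096), torch.randn(32, 300)  # dummy paired batch
opt.zero_grad()
z_img, z_txt = model(img_batch, txt_batch)
loss = nn.functional.mse_loss(z_img, z_txt)
loss.backward()
opt.step()
```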

    Storyboard tools for university and education research projects

    Full text link
    This paper is focused on the presentation of open source online storyboard and storytelling tools, for their application in the university context and in education research projects. The main aim of this paper is to provide an open source tool to support (i) university teachers, using storyboard tools as a novel educational resource to include in their lectures and practical classes, allowing them to structure concepts or explain methodologies through images with attached short descriptions; (ii) university students, as future industrial engineers, employing storyboard tools to structure the decision-making process, taking into account all the actors affected by the decision; and (iii) education research projects, adopting storyboards as a tool to aid creative writing by matching creative images with keywords that capture the essence of the research project. The research leading to these results has received funding from the European Community's H2020 Programme (H2020/2014-2020) under grant agreement no. 636909, "Cloud Collaborative Manufacturing Networks (C2NET)". Andres, B.; Poler, R. (2017). Storyboard tools for university and education research projects. INTED Proceedings (Online), 220-227. https://doi.org/10.21125/inted.2017.0173