
    Contextual Effects on Metaphor Comprehension: Experiment and Simulation

    This paper presents a computational model of referential metaphor comprehension. The model is built on top of Latent Semantic Analysis (LSA), a model of the representation of word and text meanings. Comprehending a referential metaphor consists of scanning the semantic neighbors of the metaphor in order to find words that are also semantically related to the context. The depth of that search is compared to the time it takes humans to process a metaphor. In particular, we are interested in two independent variables: the nature of the reference (either a literal or a figurative meaning) and the nature of the context (inductive or not). We show that, for both humans and the model, metaphors take longer to process than literal meanings, and that an inductive context can shorten the processing time.
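The neighbor-scanning procedure described above can be sketched with toy vectors. The word vectors and similarity threshold below are illustrative stand-ins, not the LSA space used in the study:

```python
import numpy as np

# Toy word vectors standing in for an LSA semantic space
# (assumption: the real space is trained on a large corpus).
VECTORS = {
    "shark":  np.array([0.90, 0.10, 0.20]),
    "lawyer": np.array([0.70, 0.30, 0.60]),
    "fish":   np.array([0.95, 0.05, 0.10]),
    "court":  np.array([0.30, 0.20, 0.90]),
    "ocean":  np.array([0.85, 0.00, 0.05]),
    "trial":  np.array([0.25, 0.15, 0.95]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def search_depth(metaphor, context, threshold=0.8):
    """Scan the metaphor's neighbors, ranked by semantic similarity,
    until one is sufficiently related to the context; return the
    1-based depth at which the search stops."""
    neighbors = sorted(
        (w for w in VECTORS if w not in (metaphor, context)),
        key=lambda w: cosine(VECTORS[metaphor], VECTORS[w]),
        reverse=True,
    )
    for depth, w in enumerate(neighbors, start=1):
        if cosine(VECTORS[w], VECTORS[context]) >= threshold:
            return depth
    return len(neighbors)  # nothing related found: full scan
```

With a context supporting the literal meaning ("ocean") the search stops at depth 1, while the figurative context ("court") requires scanning deeper, mirroring the longer processing times observed for metaphors.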

    Attend to You: Personalized Image Captioning with Context Sequence Memory Networks

    We address personalization issues of image captioning, which have not yet been discussed in previous research. For a query image, we aim to generate a descriptive sentence that accounts for prior knowledge such as the user's active vocabulary in previous documents. As applications of personalized image captioning, we tackle two post-automation tasks, hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting a CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance improvement for personalized image captioning over state-of-the-art captioning models.
    Comment: Accepted paper at CVPR 201
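A minimal sketch of the memory-append idea in (ii), using plain dot-product attention; the embeddings are toy values, and the real CSMN additionally uses a learned CNN memory structure over the slots:

```python
import numpy as np

def attend(query, memory):
    """Dot-product attention over memory slots (rows of `memory`);
    returns a weighted sum of the slots."""
    scores = memory @ query
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ memory

# The memory starts with context slots (e.g. embeddings of the user's
# active vocabulary; values here are illustrative) ...
memory = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
query = np.array([1.0, 0.2])

# ... and each word generated so far is appended as a new slot, so later
# decoding steps can attend to earlier output directly instead of passing
# it through a recurrent state (avoiding the vanishing-gradient problem).
generated_word = np.array([0.5, 0.5])
memory = np.vstack([memory, generated_word])

context_vector = attend(query, memory)
```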

    The Way of the Gift

    (Excerpt) In his classic work on stewardship, Helge Brattgard said that it is only as the Spirit of God, working through Word and Sacrament, leads [people] to be grateful for spiritual and material gifts received, and to see their responsibility for the administration of these gifts, that congregational life can result.1 Unfortunately, after making this wonderful assertion, he, like most other writers on stewardship, remained surprisingly silent about how liturgical action and the broader life of the Christian shape one another.

    Construction of Geometries Based on Automatic Text Interpretation

    When dealing with expanding systems, such as the universe or the economy, the statistical error arising from the system's constant expansion is too high to allow an understanding of its behaviour. This creates the need to transform an expanding system into a more straightforward system to work with. To address this problem, a geometric word space was constructed based on automatic text interpretation. News articles and economic reports about the European Union were collected and, using Python scripts, cleaned and used to train a Word2Vec model. The trained model created multi-dimensional word spaces from three periods of the last decades: a pre-2000 period, a 2000-2008 period, and a post-2008 period. After interpreting the resulting word spaces, a difference in the behaviour of the country names was noticed. All the European Union member states were getting closer to each other until 2008, but after that year there was an abrupt rupture in this trend and every country drifted apart. This behaviour can be linked to the 2008 financial crisis, though more research is needed to confirm it and to find other correlations between the behaviour of the word spaces and the real world. An improvement in the quantity and quality of the corpus will certainly improve the accuracy of the word space and enable a better understanding of its behaviour.
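The distance comparison between period-specific word spaces can be sketched as follows. The two-dimensional country vectors are invented stand-ins for vectors that would come from Word2Vec models trained on pre- and post-2008 texts (e.g. with gensim):

```python
import numpy as np

# Hypothetical country vectors from two period-specific word spaces;
# the values are hand-picked for illustration, not trained.
pre_2008  = {"germany": np.array([0.80, 0.2]), "greece": np.array([0.75, 0.3])}
post_2008 = {"germany": np.array([0.80, 0.2]), "greece": np.array([0.20, 0.9])}

def distance(space, a, b):
    """Euclidean distance between two words in one word space."""
    return float(np.linalg.norm(space[a] - space[b]))

d_before = distance(pre_2008, "germany", "greece")
d_after = distance(post_2008, "germany", "greece")
drift = d_after - d_before  # positive drift: the countries moved apart
```

Comparing the same word pair across period-specific spaces is what lets the abstract's claim ("countries drifted apart after 2008") be read off as a positive drift.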

    Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation

    We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).
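The character-composition idea can be sketched with a plain tanh RNN standing in for each LSTM direction; all weights and embeddings below are random stand-ins for the parameters the real model learns:

```python
import numpy as np

# Toy character embeddings and weights; the real model learns all of
# these jointly with the downstream task.
CHAR_DIM = 4
rng = np.random.default_rng(0)
char_emb = {c: rng.normal(size=CHAR_DIM) for c in "abcdefghijklmnopqrstuvwxyz"}
W = rng.normal(size=(CHAR_DIM, CHAR_DIM))
U = rng.normal(size=(CHAR_DIM, CHAR_DIM))

def run_rnn(seq):
    """Minimal tanh RNN standing in for one LSTM direction; returns the
    final hidden state."""
    h = np.zeros(CHAR_DIM)
    for x in seq:
        h = np.tanh(W @ x + U @ h)
    return h

def word_vector(word):
    """Compose a word vector from its characters: concatenate the final
    states of a forward and a backward pass (the bidirectional idea).
    Any word made of known characters gets a vector, even one never
    seen in training, which is what makes the vocabulary open."""
    chars = [char_emb[c] for c in word]
    return np.concatenate([run_rnn(chars), run_rnn(chars[::-1])])
```

Note the parameter economy the abstract describes: 26 character vectors plus two small matrices replace one vector per word type.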