
    Obtaining referential word meanings from visual and distributional information: Experiments on object naming

    Zarrieß S, Schlangen D. Obtaining referential word meanings from visual and distributional information: Experiments on object naming. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017). Vancouver; 2017

    Easy Things First: Installments Improve Referring Expression Generation for Objects in Photographs

    Zarrieß S, Schlangen D. Easy Things First: Installments Improve Referring Expression Generation for Objects in Photographs. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). 2016

    Resolving References to Objects in Photographs using the Words-As-Classifiers Model

    Schlangen D, Zarrieß S, Kennington C. Resolving References to Objects in Photographs using the Words-As-Classifiers Model. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin; 2016

    Learning to Parse and Translate Improves Neural Machine Translation

    There has been relatively little attention to incorporating linguistic priors into neural machine translation, and much of the previous work was further constrained to priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining the recurrent neural network grammar with attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic priors during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
    Comment: Accepted as a short paper at the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)
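    The abstract describes jointly training a parser and a translator on a shared decoder. The sketch below, in PyTorch, illustrates only that multi-task idea: one decoder state feeds both a translation head and a parse-action head, and the two losses are summed. All module names, sizes, the `parse_weight` hyperparameter, and the alignment of actions to decoder steps are illustrative assumptions, not the paper's actual NMT+RNNG architecture.

```python
# Minimal multi-task sketch of "learn to parse and translate".
# NOT the paper's NMT+RNNG model: in a real RNNG the action sequence is
# longer than the word sequence; here both are aligned for simplicity.
import torch
import torch.nn as nn

class JointTranslateParse(nn.Module):
    def __init__(self, vocab_size, n_actions, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.translate_head = nn.Linear(hidden, vocab_size)  # next target word
        self.parse_head = nn.Linear(hidden, n_actions)       # next parser action

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.embed(src))                 # final encoder state
        dec_out, _ = self.decoder(self.embed(tgt_in), h)     # (batch, T, hidden)
        return self.translate_head(dec_out), self.parse_head(dec_out)

def joint_loss(model, src, tgt_in, tgt_out, actions, parse_weight=1.0):
    # parse_weight is a hypothetical interpolation hyperparameter.
    word_logits, act_logits = model(src, tgt_in)
    ce = nn.CrossEntropyLoss()
    loss_translate = ce(word_logits.flatten(0, 1), tgt_out.flatten())
    loss_parse = ce(act_logits.flatten(0, 1), actions.flatten())
    return loss_translate + parse_weight * loss_parse
```

    At test time one would simply ignore the parse head and decode translations alone, matching the abstract's "lets it translate on its own afterward".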

    Analysing Lexical Semantic Change with Contextualised Word Representations

    This paper presents the first unsupervised approach to lexical semantic change that makes use of contextualised word representations. We propose a novel method that exploits the BERT neural language model to obtain representations of word usages, clusters these representations into usage types, and measures change along time with three proposed metrics. We create a new evaluation dataset and show that the model representations and the detected semantic shifts are positively correlated with human judgements. Our extensive qualitative analysis demonstrates that our method captures a variety of synchronic and diachronic linguistic phenomena. We expect our work to inspire further research in this direction.
    Comment: To appear in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
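    A rough sketch of the pipeline the abstract describes — contextualised usage vectors, clustering into usage types, and a change score across time periods — using Hugging Face `transformers` and scikit-learn. The model name, the cluster count `k`, and the Jensen-Shannon distance used as the change score are assumptions for illustration; the paper proposes its own three metrics.

```python
# Sketch only: assumes the target word is a single BERT wordpiece that
# actually occurs in each sentence. The JSD score is one plausible
# change metric, not necessarily one of the paper's three.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans
from scipy.spatial.distance import jensenshannon

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def usage_vector(sentence, target):
    """Contextual embedding of the first occurrence of `target`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]       # (seq_len, dim)
    pos = (enc["input_ids"][0] == tok.convert_tokens_to_ids(target)).nonzero()[0].item()
    return hidden[pos].numpy()

def change_score(usages, target, k=4):
    # usages: list of (sentence, period) pairs for one target word.
    vecs = np.stack([usage_vector(s, target) for s, _ in usages])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vecs)
    periods = sorted({p for _, p in usages})
    dists = []
    for p in periods:
        idx = [i for i, (_, q) in enumerate(usages) if q == p]
        counts = np.bincount(labels[idx], minlength=k).astype(float)
        dists.append(counts / counts.sum())              # usage-type mix per period
    # Change = divergence between earliest and latest usage-type distributions.
    return jensenshannon(dists[0], dists[-1])
```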

    Unsupervised Learning of Style-sensitive Word Vectors

    This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner. We propose extending the continuous bag of words (CBOW) model (Mikolov et al., 2013) to learn style-sensitive word vectors using a wider context window under the assumption that the style of all the words in an utterance is consistent. In addition, we introduce a novel task to predict lexical stylistic similarity and to create a benchmark dataset for this task. Our experiment with this dataset supports our assumption and demonstrates that the proposed extensions contribute to the acquisition of style-sensitive word embeddings.
    Comment: 7 pages, accepted at the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)
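    The intuition here — a wider, utterance-spanning context window pushes CBOW toward stylistic rather than purely semantic similarity — can be crudely approximated with off-the-shelf gensim by enlarging the window. This is a loose proxy, not the authors' actual model extension, and the toy corpus and window size below are placeholders.

```python
# Loose approximation of the "wider context window" idea with plain gensim;
# the paper's actual extension modifies CBOW itself.
from gensim.models import Word2Vec

# Toy corpus: each inner list is one tokenized utterance.
utterances = [
    ["hey", "dude", "that", "movie", "was", "totally", "awesome"],
    ["good", "evening", "sir", "the", "performance", "was", "splendid"],
]

# CBOW (sg=0) with a window wide enough to cover a whole utterance,
# following the assumption that style is consistent within an utterance.
model = Word2Vec(sentences=utterances, vector_size=50, window=20,
                 sg=0, min_count=1, epochs=50)

print(model.wv.most_similar("dude"))
```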

    Topically Driven Neural Language Model

    Language models are typically applied at the sentence level, without access to the broader document context. We present a neural language model that incorporates document context in the form of a topic model-like architecture, thus providing a succinct representation of the broader document context outside of the current sentence. Experiments over a range of datasets demonstrate that our model outperforms a pure sentence-based model in terms of language model perplexity, and leads to topics that are potentially more coherent than those produced by a standard LDA topic model. Our model also has the ability to generate related sentences for a topic, providing another way to interpret topics.
    Comment: 11 pages, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017) (to appear)
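    A minimal sketch of the architectural idea — conditioning a sentence-level language model on a learned document-topic vector — in PyTorch. How the topic vector is injected (concatenated to each word embedding), all dimensions, and the learned topic table are illustrative assumptions; the paper's model includes a topic-model-like component this sketch does not reproduce.

```python
# Sketch of a topic-conditioned LSTM language model; not the paper's model.
import torch
import torch.nn as nn

class TopicConditionedLM(nn.Module):
    """LSTM LM whose inputs are augmented with a per-document topic vector."""
    def __init__(self, vocab_size, n_topics, emb=128, topic_dim=32, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        # Learned embedding per topic; a document mixes them.
        self.topic_emb = nn.Parameter(torch.randn(n_topics, topic_dim) * 0.01)
        self.lstm = nn.LSTM(emb + topic_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, words, topic_mix):
        # topic_mix: (batch, n_topics) document-topic proportions, e.g. softmaxed.
        doc_vec = topic_mix @ self.topic_emb                   # (batch, topic_dim)
        doc_vec = doc_vec.unsqueeze(1).expand(-1, words.size(1), -1)
        x = torch.cat([self.word_emb(words), doc_vec], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                                     # next-word logits
```

    Feeding a one-hot `topic_mix` at generation time would sample sentences "for a topic", loosely mirroring the interpretation mechanism the abstract mentions.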

    Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation

    Rating scales are a widely used method for data annotation; however, they present several challenges, such as difficulty in maintaining inter- and intra-annotator consistency. Best-worst scaling (BWS) is an alternative method of annotation that is claimed to produce high-quality annotations while keeping the required number of annotations similar to that of rating scales. However, the veracity of this claim has never been systematically established. Here, for the first time, we set up an experiment that directly compares the rating scale method with BWS. We show that with the same total number of annotations, BWS produces significantly more reliable results than the rating scale.
    Comment: In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada, 2017
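    For context, best-worst scaling turns 4-tuples annotated with a "best" and a "worst" item into real-valued scores by simple counting: score(item) = %best − %worst, a value in [−1, 1]. A minimal sketch of that counting procedure follows; the annotation data is made up.

```python
# Standard BWS counting: proportion of times an item was chosen best,
# minus the proportion of times it was chosen worst.
from collections import Counter

# Each annotation: (tuple_of_items, best_item, worst_item) — toy data.
annotations = [
    (("great", "okay", "bad", "awful"), "great", "awful"),
    (("great", "okay", "bad", "meh"), "great", "bad"),
    (("okay", "bad", "awful", "meh"), "okay", "awful"),
]

best, worst, seen = Counter(), Counter(), Counter()
for items, b, w in annotations:
    seen.update(items)   # how often each item appeared in any tuple
    best[b] += 1
    worst[w] += 1

scores = {it: (best[it] - worst[it]) / seen[it] for it in seen}
for it, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{it:6s} {s:+.2f}")
```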