73 research outputs found

    Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation

    Following the recent success of word embeddings, it has been argued that there is no such thing as an ideal representation for words, as different models tend to capture divergent and often mutually incompatible aspects like semantics/syntax and similarity/relatedness. In this paper, we show that each embedding model captures more information than is directly apparent. A linear transformation that adjusts the similarity order of the model, without any external resource, can tailor it to achieve better results in those aspects, providing a new perspective on how embeddings encode divergent linguistic information. In addition, we explore the relation between intrinsic and extrinsic evaluation, as the effect of our transformations on downstream tasks is larger for unsupervised systems than for supervised ones. Comment: CoNLL 2018
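
    A rough illustration of the kind of transformation described above (a sketch of the general idea, not the paper's exact formulation): applying a power alpha to the singular values of an embedding matrix re-weights its principal axes and hence shifts which word pairs come out as most similar; the function name and the example value of alpha are assumptions.

    ```python
    import numpy as np

    def similarity_order_transform(X, alpha):
        # X: (vocab, dim) embedding matrix. alpha < 1 compresses the
        # dominant axes, alpha > 1 amplifies them, changing the
        # similarity order induced by cosine similarity.
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(S ** alpha) @ Vt

    # Stand-in embeddings; a trained model would be loaded here instead.
    X = np.random.randn(1000, 300)
    X_adjusted = similarity_order_transform(X, alpha=0.5)
    ```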

    Exploring the Suitability of Semantic Spaces as Word Association Models for the Extraction of Semantic Relationships

    Given recent advances in Natural Language Processing (NLP), the extraction of semantic relationships has been near the top of the research agenda for several years. This work is mainly motivated by the fact that building knowledge graphs (KG) and knowledge bases (KB), a key ingredient of intelligent applications, is a never-ending challenge: new knowledge needs to be harvested while old knowledge needs to be revised. Approaches to relation extraction from text are currently dominated by neural models that rely on some form of distant (weak) supervision over large corpora, with or without consulting external knowledge sources. In this paper, we empirically explore a novel idea: using classical semantic spaces and models built to capture word association, e.g., word embeddings, in conjunction with relation extraction approaches. The goal is to use these word association models to reinforce current relation extraction approaches. We believe this is the first attempt of its kind, and the results of the study should shed some light on the extent to which these word association models can be used, as well as on the most promising types of relationships to consider for extraction.
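
    To make the proposed combination concrete, here is a minimal sketch (our illustration, not the paper's system) that uses cosine similarity in a word-embedding space as a word-association score to filter candidate relation triples; the dictionary-of-vectors input and the threshold value are assumptions.

    ```python
    import numpy as np

    def association_score(vectors, w1, w2):
        # vectors: dict mapping word -> unit-normalized numpy vector;
        # for unit vectors the dot product equals cosine similarity.
        return float(vectors[w1] @ vectors[w2])

    def reinforce_candidates(candidates, vectors, threshold=0.4):
        # Keep (head, relation, tail) triples whose arguments are
        # strongly associated in the semantic space -- a weak signal
        # layered on top of an existing relation extractor.
        return [(h, r, t) for (h, r, t) in candidates
                if association_score(vectors, h, t) >= threshold]
    ```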

    Quantifying the Contextualization of Word Representations with Semantic Class Probing

    Pretrained language models have achieved a new state of the art on many NLP tasks, but there are still many open questions about how and why they work so well. We investigate the contextualization of words in BERT. We quantify the amount of contextualization, i.e., how well words are interpreted in context, by studying the extent to which the semantic classes of a word can be inferred from its contextualized embeddings. Quantifying contextualization helps in understanding and utilizing pretrained language models. We show that top-layer representations achieve high accuracy in inferring semantic classes; that the strongest contextualization effects occur in the lower layers; that local context is mostly sufficient for semantic class inference; and that top-layer representations are more task-specific after finetuning, while lower-layer representations are more transferable. Finetuning uncovers task-related features, but pretrained knowledge is still largely preserved.
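
    A minimal probing sketch in the spirit of this setup, assuming bert-base-uncased via the HuggingFace transformers library and a target word that survives tokenization as a single subtoken; a linear classifier (e.g., logistic regression) would then be fit on these per-layer vectors to predict semantic classes.

    ```python
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased",
                                      output_hidden_states=True)

    def layer_embedding(sentence, target, layer):
        # Contextualized vector of `target` at a given layer
        # (0 = static embeddings, 12 = top layer for BERT-base).
        enc = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).hidden_states[layer][0]
        position = enc["input_ids"][0].tolist().index(
            tok.convert_tokens_to_ids(target))
        return hidden[position].numpy()

    vec = layer_embedding("She sat on the bank of the river.", "bank", layer=8)
    ```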

    Undesirable biases in NLP: Averting a crisis of measurement

    As Natural Language Processing (NLP) technology rapidly develops and spreads into daily life, it becomes crucial to anticipate how its use could harm people. However, our ways of assessing the biases of NLP models have not kept up. While the detection of gender bias in English models in particular has enjoyed increasing research attention, many of the measures face serious problems: it is often unclear what they actually measure and how much they are subject to measurement error. In this paper, we provide an interdisciplinary approach to discussing the issue of NLP model bias by adopting the lens of psychometrics -- a field specialized in the measurement of concepts like bias that are not directly observable. We pair an introduction of relevant psychometric concepts with a discussion of how they could be used to evaluate and improve bias measures. We also argue that adopting psychometric vocabulary and methodology can make NLP bias research more efficient and transparent.
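
    As one concrete psychometric tool that could be applied in this setting (our illustration, not the paper's proposal), the sketch below computes Cronbach's alpha, a standard internal-consistency reliability estimate, over the items of a hypothetical bias measure.

    ```python
    import numpy as np

    def cronbach_alpha(item_scores):
        # item_scores: (n_subjects, n_items) array, e.g. the scores of
        # several models on the individual items of a bias test.
        X = np.asarray(item_scores, dtype=float)
        k = X.shape[1]
        sum_of_item_variances = X.var(axis=0, ddof=1).sum()
        variance_of_totals = X.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - sum_of_item_variances / variance_of_totals)
    ```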

    Understanding Word Embedding Stability Across Languages and Applications

    Despite the recent popularity of word embedding methods, there is only a small body of work exploring the limitations of these representations. In this thesis, we consider several aspects of embedding spaces, including their stability. First, we propose a definition of stability and show that common English word embeddings are surprisingly unstable. We explore how properties of data, words, and algorithms relate to instability. We extend this work to approximately 100 world languages, considering how linguistic typology relates to stability. Additionally, we consider contextualized output embedding spaces: using paraphrases, we explore properties and assumptions of BERT, a popular embedding algorithm. Second, we consider how stability and other word embedding properties affect tasks where embeddings are commonly used. We consider both word embeddings used as features in downstream applications and corpus-centered applications, where embeddings are used to study characteristics of language and individual writers. Beyond stability, we also consider other word embedding properties, specifically batching and curriculum learning, and how methodological choices made for these properties affect downstream tasks. Finally, we consider how knowledge of stability affects how we use word embeddings. Throughout this thesis, we discuss strategies to mitigate instability and provide analyses highlighting the strengths and weaknesses of word embeddings in different scenarios and languages. We show areas where more work is needed to improve embeddings, and we show where embeddings are already a strong tool.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162917/1/lburdick_1.pd
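
    One common operationalization of stability in this line of work is the overlap of nearest-neighbor lists across embedding spaces trained with different random seeds; the sketch below implements that idea, assuming row-normalized matrices over a shared vocabulary (the thesis's exact definition may differ).

    ```python
    import numpy as np

    def nearest_neighbors(X, idx, k=10):
        # X: (vocab, dim) row-normalized embeddings;
        # cosine k-nearest neighbors of word idx.
        sims = X @ X[idx]
        sims[idx] = -np.inf          # exclude the word itself
        return set(np.argsort(-sims)[:k])

    def stability(spaces, idx, k=10):
        # Mean pairwise k-NN overlap of word idx across embedding
        # spaces trained on the same corpus with different seeds.
        neighbor_sets = [nearest_neighbors(X, idx, k) for X in spaces]
        overlaps = [len(a & b) / k
                    for i, a in enumerate(neighbor_sets)
                    for b in neighbor_sets[i + 1:]]
        return float(np.mean(overlaps))
    ```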