
    Analytical Methods for Interpretable Ultradense Word Embeddings

    Word embeddings are useful for a wide variety of tasks, but they lack interpretability. By rotating word spaces, interpretable dimensions can be identified while preserving the information contained in the embeddings. In this work, we investigate three methods for making word spaces interpretable by rotation: Densifier (Rothe et al., 2016), linear SVMs, and DensRay, a new method we propose. In contrast to Densifier, DensRay can be computed in closed form and is hyperparameter-free, and it is thus more robust. We evaluate the three methods on lexicon induction and set-based word analogy. In addition, we provide qualitative insights into how interpretable word spaces can be used for removing gender bias from embeddings.
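    Of the three methods, the linear-SVM variant is the easiest to sketch: train a linear classifier that separates word vectors carrying a property (e.g., positive vs. negative sentiment) and take its weight vector as the interpretable direction. Below is a minimal sketch using scikit-learn; the seed lexicon and the random stand-in embeddings are illustrative placeholders, not the paper's data or exact setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder seed lexicon: words labeled +1 / -1 for the property of
# interest (here sentiment). Real experiments use much larger lexicons.
seeds = {"good": 1, "great": 1, "happy": 1, "bad": -1, "awful": -1, "sad": -1}

# Stand-in embedding table; a real setup would load pretrained vectors.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in seeds}

X = np.stack([embeddings[w] for w in seeds])
y = np.array([seeds[w] for w in seeds])

# A linear SVM separating the two classes; its (normalized) weight
# vector is a direction along which the property varies.
svm = LinearSVC(C=1.0).fit(X, y)
u = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

def property_score(v):
    """Project a word vector onto u: an interpretable coordinate."""
    return float(v @ u)

def remove_property(v):
    """Subtract the projection onto u, e.g., for debiasing."""
    return v - (v @ u) * u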

    Distributed representations for multilingual language processing

    Distributed representations are a central element in natural language processing. Units of text such as words, n-grams, or characters are mapped to real-valued vectors so that they can be processed by computational models. Representations trained on large amounts of text, called static word embeddings, have been found to work well across a variety of tasks such as sentiment analysis or named entity recognition. More recently, pretrained language models have been used as contextualized representations and have been found to yield even better task performance. Multilingual representations that are invariant with respect to language are useful for several reasons. Models using such representations would require training data in only one language and still generalize across multiple languages, which is especially useful for languages that suffer from data sparsity. Further, machine translation models can benefit from source and target representations in the same space. Finally, knowledge extraction models could access not only English data but data in any natural language, and thus exploit a richer source of knowledge. Given that several thousand languages exist in the world, the need for multilingual language processing seems evident. However, it is not immediately clear which properties multilingual embeddings should exhibit, how current multilingual representations work, and how they could be improved.

    This thesis investigates some of these questions. In the first publication, we explore the boundaries of multilingual representation learning by creating an embedding space across more than one thousand languages. We analyze existing methods and propose concept-based embedding learning methods. The second paper investigates the differences between creating representations for one thousand languages with little data and considering few languages with abundant data. In the third publication, we refine a method for obtaining interpretable subspaces of embeddings; this method can be used to investigate the workings of multilingual representations. The fourth publication finds that multilingual pretrained language models exhibit a high degree of multilinguality in the sense that high-quality word alignments can easily be extracted from them. The fifth paper investigates why multilingual pretrained language models are multilingual despite lacking any kind of crosslingual supervision during training; based on our findings, we propose a training scheme that leads to improved multilinguality. Finally, the sixth paper investigates the use of multilingual pretrained language models as multilingual knowledge bases.
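    The fourth publication's claim, that high-quality word alignments fall out of multilingual pretrained models, can be illustrated with a short sketch: embed a sentence pair with a multilingual encoder, compute pairwise similarities between token vectors, and keep mutual-argmax pairs. This is a generic similarity-based alignment recipe under stated assumptions (subword tokens are aligned directly, and bert-base-multilingual-cased stands in for whichever model the thesis uses), not the publication's exact method.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# mBERT as a stand-in multilingual encoder; any similar model works.
NAME = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModel.from_pretrained(NAME).eval()

def embed(sentence):
    """Return subword tokens and their L2-normalized contextual vectors."""
    batch = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state[0]  # (seq_len, dim)
    hidden = hidden / hidden.norm(dim=-1, keepdim=True)
    tokens = tok.convert_ids_to_tokens(batch["input_ids"][0].tolist())
    return tokens, hidden

src_toks, src = embed("The cat sleeps.")
tgt_toks, tgt = embed("Die Katze schläft.")

sim = src @ tgt.T  # cosine similarities between all token pairs

# Keep mutual-argmax pairs (tokens that are each other's nearest
# neighbor). This aligns subword tokens; a full system would map
# subwords back to words and handle special tokens more carefully.
for i in range(sim.size(0)):
    j = int(sim[i].argmax())
    if int(sim[:, j].argmax()) == i and src_toks[i] not in ("[CLS]", "[SEP]"):
        print(src_toks[i], "->", tgt_toks[j])
```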

    Multilingual opinion mining

    Every day, a large amount of text is generated across different online media. Much of that text contains opinions about a multitude of entities, products, services, etc. Given the growing need for automated means of analyzing, processing, and exploiting this information, sentiment analysis techniques have received a great deal of attention from industry and the scientific community over the last decade and a half. However, many of these techniques require supervised training on manually annotated examples, or other linguistic resources tied to a specific language or application domain. This limits their applicability, since such resources and annotated examples are not easy to obtain. This thesis explores a series of methods for performing various automatic text analyses within the framework of sentiment analysis, including the automatic extraction of domain terms, opinion words, and the sentiment polarity of those words (positive or negative). Finally, we propose and evaluate a method that combines continuous word embeddings and topic modelling inspired by Latent Dirichlet Allocation (LDA) to obtain an aspect-based sentiment analysis (ABSA) system that needs only a few seed words to process texts in a given language or domain. Adaptation to another language or domain thus reduces to translating the corresponding seed words.
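    A minimal sketch of the seed-word idea: embed each aspect's seed words, average them into an aspect prototype, and assign sentences to the nearest prototype by cosine similarity. The seed lists, the toy vectors, and the nearest-prototype rule are illustrative assumptions; the thesis's actual system couples embeddings with an LDA-style topic model rather than this simple matching.

```python
import numpy as np

# Stand-in embedding table; a real system would load pretrained vectors.
rng = np.random.default_rng(0)
vocab = ["food", "tasty", "menu", "staff", "friendly", "waiter",
         "the", "was", "very", "and"]
emb = {w: rng.normal(size=50) for w in vocab}

def vec(word):
    """Normalized vector for a word; zeros if out of vocabulary."""
    v = emb.get(word, np.zeros(50))
    n = np.linalg.norm(v)
    return v / n if n else v

# Hypothetical seed words defining two aspects of a restaurant domain.
aspects = {
    "food":    ["food", "tasty", "menu"],
    "service": ["staff", "friendly", "waiter"],
}

# Aspect prototype = mean of its seed-word vectors.
protos = {a: np.mean([vec(w) for w in ws], axis=0) for a, ws in aspects.items()}

def assign_aspect(sentence):
    """Assign the aspect whose prototype is most similar to the
    average word vector of the sentence."""
    s = np.mean([vec(w) for w in sentence.lower().split()], axis=0)
    return max(protos, key=lambda a: float(s @ protos[a]))

print(assign_aspect("the staff was very friendly"))
```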

    Combining contextualized and non-contextualized embeddings for domain adaptation and beyond


    Vector Semantics

    This open access book introduces vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. Although both schools have ‘linguistics’ in their name, there has so far been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures by treating each dictionary definition as an equation, and the entire lexicon as a set of equations mutually constraining all meanings.
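    The "each definition is an equation" idea can be made concrete with a toy fixed-point computation: take each word's vector to be the mean of the vectors of the words in its definition, and iterate until the lexicon is self-consistent. This is a deliberately simplified illustration with a hypothetical three-word micro-lexicon; the book's actual formalism uses linear polytopes, not this averaging scheme.

```python
import numpy as np

# Hypothetical micro-lexicon: each word is defined in terms of others.
definitions = {
    "puppy": ["dog", "young"],
    "dog":   ["animal"],
    "young": ["age"],
}
# "animal" and "age" act as primitives with fixed vectors.
primitives = {"animal": np.array([1.0, 0.0]), "age": np.array([0.0, 1.0])}

# Initialize defined words at zero, then repeatedly set each word's
# vector to the mean of its definition's vectors: the definitions act
# as a system of equations mutually constraining all meanings.
vecs = {**{w: np.zeros(2) for w in definitions}, **primitives}
for _ in range(50):
    for w, defn in definitions.items():
        vecs[w] = np.mean([vecs[d] for d in defn], axis=0)

print(vecs["puppy"])  # converges to the mean of "dog" and "young": [0.5, 0.5]
```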
