
    Filaments of Meaning in Word Space

    Word space models, in the sense of vector space models built on distributional data taken from texts, are used to model semantic relations between words. We argue that the high dimensionality of typical vector space models leads to unintuitive effects when modeling likeness of meaning, and that the local structure of word spaces is where interesting semantic relations reside. We show that the local structure of word spaces has substantially different dimensionality and character than the global space, and that this structure shows potential to be exploited for further semantic analysis using methods for local analysis of vector space structure, rather than the globally scoped methods typically in use today, such as singular value decomposition or principal component analysis.
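The contrast the abstract draws between local and global dimensionality can be sketched with plain PCA: estimate how many principal components are needed to explain most of the variance in a word's nearest-neighbour region versus in the whole space. A minimal NumPy illustration (the function names, the 90%-variance threshold, and the k-NN neighbourhood definition are our own choices, not the paper's):

```python
import numpy as np

def pca_dims_for_variance(X, threshold=0.9):
    """Number of principal components needed to explain `threshold` of the variance."""
    Xc = X - X.mean(axis=0)
    # Singular values of the centered data give the component variances.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

def local_vs_global_dims(X, query_idx, k=20, threshold=0.9):
    """Compare the PCA dimensionality of one word's k-NN neighbourhood
    against the dimensionality of the whole space."""
    dists = np.linalg.norm(X - X[query_idx], axis=1)
    neighbours = np.argsort(dists)[:k]   # includes the query word itself
    local_dims = pca_dims_for_variance(X[neighbours], threshold)
    global_dims = pca_dims_for_variance(X, threshold)
    return local_dims, global_dims
```

If a word's neighbourhood lies on a low-dimensional manifold inside a high-dimensional global space, `local_dims` comes out much smaller than `global_dims`, which is the kind of effect the paper argues for.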

    Retrieving Multi-Entity Associations: An Evaluation of Combination Modes for Word Embeddings

    Word embeddings have gained significant attention as learnable representations of semantic relations between words, and have been shown to improve upon the results of traditional word representations. However, little effort has been devoted to using embeddings for the retrieval of entity associations beyond pairwise relations. In this paper, we use popular embedding methods to train vector representations of an entity-annotated news corpus, and evaluate their performance for the task of predicting entity participation in news events versus a traditional word cooccurrence network as a baseline. To support queries for events with multiple participating entities, we test a number of combination modes for the embedding vectors. While we find that even the best combination modes for word embeddings do not quite reach the performance of the full cooccurrence network, especially for rare entities, we observe that different embedding methods model different types of relations, thereby indicating the potential for ensemble methods.
    Comment: 4 pages; Accepted at SIGIR'1
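The combination modes the abstract refers to can be illustrated with a few common ways of merging entity vectors into one query vector, which is then matched against candidate events by cosine similarity. The modes and function names below are illustrative stand-ins, not necessarily the set evaluated in the paper:

```python
import numpy as np

def combine(vectors, mode="mean"):
    """Combine several entity embedding vectors into a single query vector."""
    V = np.stack(vectors)
    if mode == "mean":
        return V.mean(axis=0)
    if mode == "sum":
        return V.sum(axis=0)
    if mode == "max":
        return V.max(axis=0)       # element-wise maximum
    if mode == "prod":
        return V.prod(axis=0)      # element-wise product
    raise ValueError(f"unknown mode: {mode}")

def rank_events(query_vec, event_vecs):
    """Rank candidate event vectors by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    E = event_vecs / np.linalg.norm(event_vecs, axis=1, keepdims=True)
    sims = E @ q
    return np.argsort(-sims)       # best-matching event first
```

For a query with entities A and B, one would combine their two embedding vectors and rank events by the resulting similarity scores; the paper's finding is that the choice of mode matters and differs between embedding methods.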

    Mining Meaning from Text by Harvesting Frequent and Diverse Semantic Itemsets

    In this paper, we present a novel and completely unsupervised approach to unravel meanings (or senses) from linguistic constructions found in large corpora by introducing the concept of a semantic vector. A semantic vector is a space-transformed vector whose features represent fine-grained semantic information units, instead of values of co-occurrences within a collection of texts. In more detail, instead of seeing words as vectors of frequency values, we propose to first explode words into a multitude of tiny semantic information units retrieved from existing resources like WordNet and ConceptNet, and then cluster them into frequent and diverse patterns. This way, on the one hand, we are able to model linguistic data with a larger, but much denser and more informative, semantic feature space. On the other hand, because the model is based on basic, conceptual information, we are also able to generate new data by querying the above-mentioned semantic resources with the features contained in the extracted patterns. We tested the idea on a dataset of 640 million subject-verb-object triples, automatically inducing senses for specific input verbs and demonstrating the validity and the potential of the presented approach for modeling and understanding natural language.
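The "explode, then mine" idea can be sketched in miniature: replace each word by a set of semantic features, then count which feature combinations recur across words. The toy feature dictionary below stands in for real WordNet/ConceptNet lookups, and the itemset miner is a deliberately naive counter rather than the paper's algorithm:

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for WordNet/ConceptNet lookups; a real system would query
# those resources. These entries are illustrative only.
SEMANTIC_FEATURES = {
    "dog":  {"animal", "mammal", "pet", "canine"},
    "cat":  {"animal", "mammal", "pet", "feline"},
    "wolf": {"animal", "mammal", "canine", "wild"},
    "car":  {"artifact", "vehicle", "wheeled"},
}

def explode(word):
    """Replace a word by its fine-grained semantic information units."""
    return SEMANTIC_FEATURES.get(word, set())

def frequent_itemsets(words, size=2, min_support=2):
    """Count feature combinations shared across words; keep the frequent ones."""
    counts = Counter()
    for w in words:
        feats = sorted(explode(w))
        for combo in combinations(feats, size):
            counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}
```

Frequent itemsets like ("animal", "mammal") then act as the dense, conceptual dimensions of the transformed space, and can also be used to query the semantic resources in reverse to generate new candidate words.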

    Machine Learning and Clinical Text. Supporting Health Information Flow

    Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim of this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether, five machine learning applications for three practical cases are described: the first two applications are binary classification and regression, related to the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification, for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification, for the practical task of diagnosis coding. It is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration.
Practical cases are very different, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
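The processing-time argument for hold-out evaluation is that a single random split needs one training pass where k-fold cross-validation needs k. A generic sketch of a hold-out split (not the dissertation's specific method):

```python
import numpy as np

def holdout_split(n_samples, test_fraction=0.2, seed=0):
    """Single random hold-out split: one training/evaluation pass instead of
    k-fold's k passes, trading some evaluation variance for processing time."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)        # shuffle sample indices
    n_test = int(round(n_samples * test_fraction))
    return idx[n_test:], idx[:n_test]       # (train indices, test indices)
```

Repeating such splits with different seeds is a common way to recover some of the variance estimate that a single split loses relative to cross-validation.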

    Computational explorations of semantic cognition

    Motivated by the widespread use of distributional models of semantics within the cognitive science community, we follow a computational modelling approach in order to better understand and expand the applicability of such models, as well as to test potential ways in which they can be improved and extended. We review evidence in favour of the assumption that distributional models capture important aspects of semantic cognition. We look at the models’ ability to account for behavioural data and fMRI patterns of brain activity, and investigate the structure of model-based, semantic networks. We test whether introducing affective information, obtained from a neural network model designed to predict emojis from co-occurring text, can improve the performance of linguistic and linguistic-visual models of semantics, in accounting for similarity/relatedness ratings. We find that adding visual and affective representations improves performance, especially for concrete and abstract words, respectively. We describe a processing model based on distributional semantics, in which activation spreads throughout a semantic network, as dictated by the patterns of semantic similarity between words. We show that the activation profile of the network, measured at various time points, can account for response time and accuracies in lexical and semantic decision tasks, as well as for concreteness/imageability and similarity/relatedness ratings. We evaluate the differences between concrete and abstract words, in terms of the structure of the semantic networks derived from distributional models of semantics. We examine how the structure is related to a number of factors that have been argued to differ between concrete and abstract words, namely imageability, age of acquisition, hedonic valence, contextual diversity, and semantic diversity. 
We use distributional models to explore factors that might be responsible for the poor linguistic performance of children with Developmental Language Disorder. Based on the assumption that certain model parameters can be given a psychological interpretation, we start from "healthy" models and generate "lesioned" models by manipulating the parameters. This allows us to determine the importance of each factor, and its effects with respect to learning concrete vs. abstract words.
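The processing model described above, in which activation spreads through a similarity-weighted semantic network and is read out at successive time points, can be sketched as follows. The decay parameter and the row-normalisation of the similarity matrix are our assumptions for illustration, not the thesis's exact formulation:

```python
import numpy as np

def spread_activation(S, seed_idx, steps=5, decay=0.8):
    """Spread activation through a semantic network.

    S: symmetric word-by-word similarity matrix; each row is normalised so a
    word passes on its activation in proportion to its similarities.
    Returns the activation profile of the network at each time step."""
    W = S / S.sum(axis=1, keepdims=True)   # row-stochastic transition weights
    act = np.zeros(S.shape[0])
    act[seed_idx] = 1.0                    # activate the stimulus word
    history = [act.copy()]
    for _ in range(steps):
        act = decay * (act @ W)            # activation flows along similarities
        history.append(act.copy())
    return history
```

Reading out the activation of a target node at successive steps gives the kind of time course that can be related to response times in lexical and semantic decision tasks.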

    Meaning in Distributions : A Study on Computational Methods in Lexical Semantics

    This study investigates the connection between lexical items' distributions and their meanings from the perspective of computational distributional operations. When applying computational methods in meaning-related research, it is customary to refer to the so-called distributional hypothesis, according to which differences in distributions and meanings are mutually correlated. However, making use of such a hypothesis requires a critical explication of the concept of distribution and plausible arguments for why any particular distributional structure is connected to a particular meaning-related phenomenon. In broad strokes, the present study seeks to chart the major differences in how the concept of distribution is conceived in the structuralist/autonomous and usage-based/functionalist theoretical families of contemporary linguistics. The two theoretical positions on distributions are studied to identify how meanings could enter them as enabling or constraining factors. The empirical part of the study comprises two case studies. In the first, three pairs of antonymous adjectives (köyhä/rikas, sairas/terve and vanha/nuori) are studied distributionally. Very narrow bag-of-words vector representations of distributions show how the dimensions on which relevant distributional similarities are based already conflate an unexpectedly varied range of linguistic phenomena, spanning from syntax-oriented conceptual constraints to connotations, pragmatic patterns and affectivity. Thus, the results simultaneously corroborate the distributional hypothesis and challenge its over-generalized, uncritical applicability. For the study of meaning, distributional and semantic spaces cannot be treated as analogous by default. In the second case study, a distributional operation is purposefully built to answer a research question related to the historical development of Finnish social law terminology in the period 1860–1910.
Using a method based on interlinked collocation networks, the study shows how the term vaivainen ('pauper, beggar, measly') receded from the prestigious legal and administrative registers during the studied period. Corroborating some of the findings of the previous parts of this dissertation, the case study shows how structures found in distributional representations cannot be satisfactorily explained without relying on semantic, pragmatic and discoursal interpretations. The analysis confirms the timeline of the studied word use in the given register. It also shows how distributional methods based on networked patterns of co-occurrence highlight incomparable structures of very different natures, and skew towards the frequent occurrence types prevalent in the data.

Modern computational methods, using statistical models built from large text corpora, perform almost flawlessly many tasks that require understanding the meanings of words. From the viewpoint of linguistic methodology, it is therefore interesting how well such methods suit the linguistic study of the meanings of linguistic structures. This dissertation approaches the question from the perspective of lexical semantics, and aims both theoretically and empirically to describe what kinds of meaning computational methods based purely on word sequences are able to capture. The dissertation consists of two case studies. The first examines three antonymous adjective pairs using a vector space model built from the Suomi24 corpus. The results show how even very restricted sequential contexts carry information not only about conceptual meanings but also, among other things, about their connotations and affectivity. The picture of meaning produced by the sequential context is, however, unpredictable in its coverage, and the usage patterns that are frequent in the data clearly affect which features of meaning become visible. The second case study traces the history of a social-law term, vaivainen, in the late nineteenth century using the National Library's historical digital newspaper collection. Co-occurrence networks are used to investigate how the term disappeared from legal language, by identifying in the data a structure corresponding to the administrative-juridical register and following the position of vaivainen within it. The co-occurrence networks used as a method do not, however, purely represent any single register, but blend features from the various categories with which language use has been described, for example, in text research. The densest networks arise from the combined effect of registers, genres, text types and lexical cohesion. The results of the case study suggest that this is a general feature of many similar methods, including common topic models.
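The collocation networks used in the second case study can be approximated in miniature: build a sentence-level co-occurrence network and measure how strongly a term is embedded in it, for instance within a register-specific subcorpus. A toy sketch, not the study's actual pipeline:

```python
from collections import Counter
from itertools import combinations

def collocation_network(sentences, min_count=2):
    """Build a weighted co-occurrence network: nodes are words, and edge
    weights count how often two words appear in the same sentence."""
    edges = Counter()
    for sent in sentences:
        # Each unordered word pair in a sentence strengthens one edge.
        for a, b in combinations(sorted(set(sent)), 2):
            edges[(a, b)] += 1
    return {e: w for e, w in edges.items() if w >= min_count}

def node_strength(network, word):
    """Summed edge weight of a word: a crude measure of how firmly the word
    is embedded in the network."""
    return sum(w for (a, b), w in network.items() if word in (a, b))
```

Comparing a term's node strength across time slices of a corpus gives a simple signal of the kind the study tracks for vaivainen, with the caveat the abstract itself raises: such networks mix registers, genres, text types and lexical cohesion rather than isolating any one of them.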

    Integrating Structure and Meaning: Using Holographic Reduced Representations to Improve Automatic Text Classification

    Current representation schemes for automatic text classification treat documents as syntactically unstructured collections of words (Bag-of-Words) or 'concepts' (Bag-of-Concepts). Past attempts to encode syntactic structure have treated part-of-speech information as another word-like feature, but have been shown to be less effective than non-structural approaches. We propose a new representation scheme that uses Holographic Reduced Representations (HRRs) to encode both semantic and syntactic structure, though in very different ways. This method is unique in the literature in that it encodes structure across all features of the document vector while preserving text semantics. Our method does not increase the dimensionality of the document vectors, allowing for efficient computation and storage. We present the results of various Support Vector Machine classification experiments that demonstrate the superiority of this method over Bag-of-Concepts representations, and an improvement over Bag-of-Words in certain classification contexts.
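HRR binding is conventionally implemented as circular convolution, with circular correlation as its approximate inverse (Plate's operations); both keep the dimensionality fixed, which is why the document vectors do not grow. A minimal sketch showing that a role-filler binding can be approximately unbound:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: the HRR binding operation, computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation: the approximate inverse, used for unbinding."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def cosine(a, b):
    """Cosine similarity, used to compare a noisy decoding to candidates."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Binding a syntactic role vector to a word vector with `cconv` and later probing the trace with `ccorr` recovers a noisy version of the word vector, recognisable by cosine similarity; in the document-classification setting, such bound pairs are superimposed into a single fixed-width document vector.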