
    Spotlight on dream recall. The ages of dreams

    Brain and sleep maturation covary across different stages of life. At the same time, dream generation and dream recall are intrinsically dependent on the development of neural systems. The aim of this paper is to review existing studies on dreaming in infancy, adulthood, and old age, assessing whether dream mentation reflects changes in the underlying cerebral activity and cognitive processes. Some evidence from childhood investigations, albeit still weak and contrasting, reveals a correlation between cognitive skills and specific features of dream reports. In this respect, infantile amnesia, confabulatory reports, dream-reality discerning, and limitations in language production and emotional comprehension should be considered important confounding factors. In contrast, growing evidence in adults suggests that the neurophysiological mechanisms underlying the encoding and retrieval of episodic memories may remain the same across different states of consciousness. More directly, some studies on adults point to shared neural mechanisms between waking cognition and corresponding dream features. A general decline in dream recall frequency is commonly reported in the elderly and is usually explained in terms of diminished interest in dreaming and in its emotional salience. Although empirical evidence is not yet available, an alternative hypothesis attributes this reduction to age-related cognitive decline. The current state of knowledge is partly a consequence of the variety of methods used to investigate dream experience. Very few studies in the elderly, and none in childhood, have examined whether dream recall is related to specific electrophysiological patterns at different ages. Above all, the lack of longitudinal psychophysiological studies seems to be the main issue. As a main message, we suggest that future longitudinal studies should collect dream reports upon awakening from different sleep states and include neurobiological measures alongside cognitive performance.

    Graphene: Semantically-Linked Propositions in Open Information Extraction

    We present an Open Information Extraction (IE) approach that uses a two-layered transformation stage consisting of a clausal disembedding layer and a phrasal disembedding layer, together with rhetorical relation identification. In this way, we convert sentences with complex linguistic structure into simplified, syntactically sound sentences, from which we extract propositions represented in a two-layered hierarchy: core relational tuples and accompanying contextual information, semantically linked via rhetorical relations. In a comparative evaluation, we demonstrate that our reference implementation Graphene outperforms state-of-the-art Open IE systems in the construction of correct n-ary predicate-argument structures. Moreover, we show that existing Open IE approaches can benefit from the transformation process of our framework. Comment: 27th International Conference on Computational Linguistics (COLING 2018).
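
    A hypothetical sketch of the two-layered output representation described above: a core relational tuple with contextual propositions attached via rhetorical relations. The class names (Proposition, RhetoricalLink, ExtractionUnit) and the example sentence are illustrative assumptions, not Graphene's actual API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Proposition:
        """A simplified, syntactically sound clause as an n-ary tuple."""
        subject: str
        relation: str
        arguments: List[str]

    @dataclass
    class RhetoricalLink:
        """Contextual proposition linked to a core one via a rhetorical relation."""
        relation: str          # e.g. "CAUSE", "BACKGROUND", "ELABORATION"
        context: Proposition

    @dataclass
    class ExtractionUnit:
        """Two-layered hierarchy: one core tuple plus its linked context."""
        core: Proposition
        context: List[RhetoricalLink] = field(default_factory=list)

    # Example: "The house collapsed because the storm hit the coast."
    unit = ExtractionUnit(
        core=Proposition("the house", "collapsed", []),
        context=[RhetoricalLink("CAUSE", Proposition("the storm", "hit", ["the coast"]))],
    )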

    Knowledge Base Population using Semantic Label Propagation

    A crucial aspect of a knowledge base population system that extracts new facts from text corpora is the generation of training data for its relation extractors. In this paper, we present a method that maximizes the effectiveness of newly trained relation extractors at a minimal annotation cost. Manual labeling can be significantly reduced by Distant Supervision, which constructs training data automatically by aligning a large text corpus with an existing knowledge base of known facts. For example, all sentences mentioning both 'Barack Obama' and 'US' may serve as positive training instances for the relation born_in(subject,object). However, distant supervision typically results in a highly noisy training set: many training sentences do not actually express the intended relation. We propose to combine distant supervision with minimal manual supervision in a technique called feature labeling, to eliminate noise from the large and noisy initial training set, resulting in a significant increase in precision. We further improve on this approach by introducing the Semantic Label Propagation method, which uses the similarity between low-dimensional representations of candidate training instances to extend the training set, increasing recall while maintaining high precision. Our proposed strategy for generating training data is studied and evaluated on an established test collection designed for knowledge base population tasks. The experimental results show that the Semantic Label Propagation strategy leads to substantial performance gains over existing approaches, while requiring an almost negligible manual annotation effort. Comment: Submitted to Knowledge-Based Systems, special issue on Knowledge Bases for Natural Language Processing.
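
    A minimal sketch of the semantic label propagation idea described above, assuming a sentence-embedding function: candidate instances whose low-dimensional representations are close to manually confirmed positive seeds are added to the training set. The embed() placeholder and the similarity threshold are assumptions, not the paper's implementation.

    import numpy as np

    def embed(sentence: str) -> np.ndarray:
        # Placeholder: map a sentence to a low-dimensional vector
        # (e.g. averaged word embeddings); random here for illustration only.
        rng = np.random.default_rng(abs(hash(sentence)) % (2 ** 32))
        return rng.standard_normal(50)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def propagate_labels(confirmed_positives, candidates, threshold=0.8):
        # Extend the training set with candidates similar to confirmed seeds.
        seed_vectors = [embed(s) for s in confirmed_positives]
        extended = list(confirmed_positives)
        for cand in candidates:
            v = embed(cand)
            if max(cosine(v, sv) for sv in seed_vectors) >= threshold:
                extended.append(cand)  # treated as an additional positive instance
        return extended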

    Mining web data for competency management

    We present CORDER (COmmunity Relation Discovery by named Entity Recognition), an unsupervised machine learning algorithm that exploits named entity recognition and co-occurrence data to associate individuals in an organization with their expertise and associates. We discuss the problems associated with evaluating unsupervised learners and report our initial evaluation experiments.
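
    An illustrative sketch of the co-occurrence idea behind such an approach: once named entity recognition has identified people and topic terms in web pages, pairs that frequently co-occur are associated. The inputs, names, and scoring are assumed for illustration; this is not the published CORDER algorithm.

    from collections import Counter
    from itertools import product

    def associate(pages):
        # pages: iterable of (person_entities, topic_entities) per document.
        counts = Counter()
        for people, topics in pages:
            for person, topic in product(set(people), set(topics)):
                counts[(person, topic)] += 1
        return counts

    pages = [
        (["Alice Smith"], ["semantic web", "ontology alignment"]),
        (["Alice Smith", "Bob Jones"], ["semantic web"]),
    ]
    print(associate(pages).most_common(3))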

    Information Extraction in Illicit Domains

    Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment. Comment: 10 pages, ACM WWW 201
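
    A hedged sketch of the general recipe described above, not the paper's implementation: represent each candidate token by averaging assumed pre-trained word embeddings over a small context window, then classify candidates using only a handful of seed annotations (here via a simple nearest-centroid rule). The embedding table, window size, and function names are assumptions.

    import numpy as np

    def context_vector(tokens, index, table, window=3, dim=50):
        # Average embeddings of the tokens around a candidate position.
        lo, hi = max(0, index - window), min(len(tokens), index + window + 1)
        vecs = [table.get(t, np.zeros(dim)) for t in tokens[lo:hi]]
        return np.mean(vecs, axis=0)

    def train_centroids(seed_examples, table):
        # seed_examples: list of (tokens, index, label) seed annotations.
        by_label = {}
        for tokens, idx, label in seed_examples:
            by_label.setdefault(label, []).append(context_vector(tokens, idx, table))
        return {label: np.mean(vs, axis=0) for label, vs in by_label.items()}

    def predict(tokens, index, centroids, table):
        # Assign the label of the closest centroid (dot-product similarity).
        v = context_vector(tokens, index, table)
        return max(centroids, key=lambda lbl: float(v @ centroids[lbl]))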