
    Correlating neural and symbolic representations of language

    Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP. Here we present two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees. We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results. We then apply our methods to correlate neural representations of English sentences with their constituency parse trees.
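    As a rough illustration of the RSA idea described in this abstract (the tree-kernel computation itself is omitted), the sketch below builds two pairwise-similarity structures, one from neural activation vectors and one from a symbolic similarity function such as a tree kernel, and correlates them with Spearman's rho. The function names, the cosine similarity choice, and the symbolic_similarity placeholder are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of Representational Similarity Analysis (RSA):
# correlate the pairwise similarity structure of neural activations with the
# pairwise similarity structure given by a symbolic measure (e.g. a tree kernel).
# symbolic_similarity is a hypothetical placeholder, not the authors' code.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def rsa_score(activations, symbolic_objects, symbolic_similarity):
    """activations: (n_sentences, n_units) array of neural representations.
    symbolic_objects: list of n_sentences symbolic structures (e.g. parse trees).
    symbolic_similarity: function (tree_a, tree_b) -> float, e.g. a tree kernel."""
    n = len(symbolic_objects)
    # Neural similarity: 1 - cosine distance for every sentence pair (condensed, i < j order).
    neural_sim = 1.0 - pdist(activations, metric="cosine")
    # Symbolic similarity for the same pairs, in the same (i < j) order.
    symbolic_sim = np.array([
        symbolic_similarity(symbolic_objects[i], symbolic_objects[j])
        for i in range(n) for j in range(i + 1, n)
    ])
    # Spearman correlation between the two similarity structures.
    rho, p_value = spearmanr(neural_sim, symbolic_sim)
    return rho, p_value
```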

    Neural correlates of processing valence and arousal in affective words

    Psychological frameworks conceptualize emotion along two dimensions, "valence" and "arousal." Arousal invokes a single axis of intensity increasing from neutral to maximally arousing. Valence can be described variously as a bipolar continuum, as independent positive and negative dimensions, or as hedonic value (distance from neutral). In this study, we used functional magnetic resonance imaging to characterize neural activity correlating with arousal and with distinct models of valence during presentation of affective word stimuli. Our results extend observations in the chemosensory domain suggesting a double dissociation in which subregions of orbitofrontal cortex process valence, whereas the amygdala preferentially processes arousal. In addition, our data support the physiological validity of descriptions of valence along independent axes or as absolute distance from neutral, but fail to support the validity of descriptions of valence along a bipolar continuum.
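    To make the competing valence models in this abstract concrete, the short sketch below derives a regressor for each from word valence ratings on a hypothetical 1-9 scale with 5 as neutral; the scale, example values, and variable names are assumptions for illustration only, not the study's materials or analysis.

```python
# Illustrative encodings of the competing valence models described above,
# assuming word valence ratings on a hypothetical 1-9 scale with 5 = neutral.
import numpy as np

ratings = np.array([1.5, 3.0, 5.0, 6.5, 8.8])  # example valence ratings (hypothetical)
NEUTRAL = 5.0

# Bipolar continuum: a single signed axis running from negative to positive.
bipolar = ratings - NEUTRAL

# Independent positive and negative dimensions: each word loads on one axis only.
positive = np.clip(ratings - NEUTRAL, 0, None)
negative = np.clip(NEUTRAL - ratings, 0, None)

# Hedonic value: unsigned distance from neutral, regardless of direction.
hedonic = np.abs(ratings - NEUTRAL)

print(bipolar, positive, negative, hedonic, sep="\n")
```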

    A broad-coverage distributed connectionist model of visual word recognition

    In this study we describe a distributed connectionist model of morphological processing, covering a realistically sized sample of the English language. The purpose of this model is to explore how effects of discrete, hierarchically structured morphological paradigms can arise as a result of the statistical sub-regularities in the mapping between word forms and word meanings. We present a model that learns to produce at its output a realistic semantic representation of a word, on presentation of a distributed representation of its orthography. After training, in three experiments, we compare the outputs of the model with the lexical decision latencies for large sets of English nouns and verbs. We show that the model has developed detailed representations of morphological structure, giving rise to effects analogous to those observed in visual lexical decision experiments. In addition, we show how the association between word form and word meaning also gives rise to recently reported differences between regular and irregular verbs, even in their completely regular present-tense forms. We interpret these results as underlining the key importance for lexical processing of the statistical regularities in the mappings between form and meaning.
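    A minimal sketch of the general architecture described in this abstract, a feedforward network mapping a distributed orthographic code onto a distributed semantic representation, is given below. The layer sizes, the sigmoid units, the PyTorch framework, and the random training data are placeholders, not the model or corpus reported in the study.

```python
# Minimal sketch of a distributed form-to-meaning network: a feedforward
# mapping from an orthographic input vector to a semantic output vector.
# Dimensionalities and training data are illustrative placeholders only.
import torch
import torch.nn as nn

ORTHO_DIM, HIDDEN_DIM, SEM_DIM = 260, 500, 200  # assumed layer sizes

model = nn.Sequential(
    nn.Linear(ORTHO_DIM, HIDDEN_DIM),
    nn.Sigmoid(),
    nn.Linear(HIDDEN_DIM, SEM_DIM),
    nn.Sigmoid(),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Dummy training pairs: orthographic codes mapped to target semantic vectors.
ortho = torch.rand(64, ORTHO_DIM)
semantics = torch.rand(64, SEM_DIM)

for epoch in range(100):
    optimizer.zero_grad()
    output = model(ortho)               # predicted semantics for each word form
    loss = loss_fn(output, semantics)   # error against the target meanings
    loss.backward()
    optimizer.step()
```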