6 research outputs found

    Is Structure Necessary for Modeling Argument Expectations in Distributional Semantics?

    Full text link
    Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations. The few exceptions have mostly modeled this phenomenon with structured distributional models, implicitly assuming a similarly structured representation of events. Recent experimental evidence, however, suggests that the human processing system could also exploit an unstructured "bag-of-arguments" type of event representation to predict upcoming input. In this paper, we re-implement a traditional structured model and adapt it to compare the different hypotheses concerning the degree of structure in our event knowledge, evaluating their relative performance in the task of argument expectation update.
    Comment: conference paper, IWC
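The contrast between the two hypotheses can be illustrated with a toy sketch (the words, vectors, and scoring below are hypothetical illustrations, not the paper's actual models, which are built from corpus data): a "bag-of-arguments" model collapses all context words into a single centroid, whereas a structured model would keep a separate expectation per thematic role.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical 3-d toy vectors; real models use corpus-derived DSM vectors.
vecs = {"chef":  np.array([1.0, 0.2, 0.0]),
        "cut":   np.array([0.4, 0.8, 0.1]),
        "knife": np.array([0.1, 1.0, 0.3]),
        "onion": np.array([0.2, 0.9, 0.4])}

# Bag-of-arguments: the expectation for the upcoming argument is simply
# the centroid of all words seen so far, ignoring syntactic roles.
context = ["chef", "cut", "knife"]
expectation = np.mean([vecs[w] for w in context], axis=0)

# Score a candidate filler by similarity to that centroid.
score = cos(expectation, vecs["onion"])
```

A structured variant would instead maintain one expectation vector per role (agent, patient, instrument) and score a candidate only against its own slot.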

    Dissociable electrophysiological measures of natural language processing reveal differences in speech comprehension strategy in healthy ageing

    Get PDF
    Healthy ageing leads to changes in the brain that impact upon sensory and cognitive processing. It is not fully clear how these changes affect the processing of everyday spoken language. Prediction is thought to play an important role in language comprehension, where information about upcoming words is pre-activated across multiple representational levels. However, evidence from electrophysiology suggests differences in how older and younger adults use context-based predictions, particularly at the level of semantic representation. We investigate these differences during natural speech comprehension by presenting older and younger subjects with continuous, narrative speech while recording their electroencephalogram. We use time-lagged linear regression to test how distinct computational measures of (1) semantic dissimilarity and (2) lexical surprisal are processed in the brains of both groups. Our results reveal dissociable neural correlates of these two measures that suggest differences in how younger and older adults successfully comprehend speech. Specifically, our results suggest that, while younger and older subjects both employ context-based lexical predictions, older subjects are significantly less likely to pre-activate the semantic features relating to upcoming words. Furthermore, across our group of older adults, we show that the weaker the neural signature of this semantic pre-activation mechanism, the lower a subject's semantic verbal fluency score. We interpret these findings as indicating that prediction plays a generally reduced role at the semantic level in the brains of older listeners during speech comprehension, and that these changes may be part of an overall strategy to successfully comprehend speech with reduced cognitive resources.
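The time-lagged linear regression described here is essentially a temporal response function (TRF) fit: the EEG at each time point is modeled as a weighted sum of time-shifted copies of a stimulus feature. A minimal sketch on synthetic data (the variable names and the ridge formulation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Design matrix whose columns are time-shifted copies of the
    stimulus feature (lags 0 .. n_lags-1 samples)."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Ridge-regularised least squares: w minimises
    ||X w - eeg||^2 + alpha * ||w||^2."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Toy demonstration: the "EEG" is the stimulus delayed by 3 samples plus
# noise, so the recovered response function should peak at lag 3.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
eeg = np.roll(stim, 3) + 0.1 * rng.standard_normal(2000)
w = fit_trf(stim, eeg, n_lags=10)
peak_lag = int(np.argmax(np.abs(w)))  # should recover the 3-sample delay
```

With real data, the stimulus vector would carry the semantic-dissimilarity or lexical-surprisal value at each word onset, and the regularisation strength would be chosen by cross-validation.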

    Semantic processing with and without awareness. Insights from computational linguistics and semantic priming.

    Get PDF
    During my PhD, I’ve explored how native speakers access semantic information from lexical stimuli, and whether consciousness plays a role in the process of meaning construction.

    In a first study, I exploited the metaphor linking time and space to assess the specific contribution of linguistically–coded information to the emergence of priming. In fact, time is metaphorically arranged on either the horizontal or the sagittal axis in space (Clark, 1973), but only the latter comes up in language (e.g., "a bright future in front of you"). In a semantic categorization task, temporal target words (e.g., earlier, later) were primed by spatial words that were processed either consciously (unmasked) or unconsciously (masked). With visible primes, priming was observed for both lateral and sagittal words; yet, only the latter led to a significant effect when the primes were masked. Thus, unconscious word processing may be limited to those aspects of meaning that emerge in language use.

    In a second series of experiments, I tried to better characterize these aspects by taking advantage of Distributional Semantic Models (DSMs; Marelli, 2017), which represent word meaning as vectors built upon word co–occurrences in large textual databases. I compared state–of–the–art DSMs with Pointwise Mutual Information (PMI; Church & Hanks, 1990), a measure of local association between words that is merely based on their surface co–occurrence. In particular, I tested how the two indexes perform on a semantic priming dataset comprising visible and masked primes, and different stimulus onset asynchronies between the two stimuli. Subliminally, neither predictor alone elicited significant priming, although participants who showed some residual prime visibility showed larger effects. Post–hoc analyses showed that for subliminal priming to emerge, the additive contribution of both PMI and DSM was required. Supraliminally, PMI outperformed the DSM in fitting the behavioral data. According to these results, what has traditionally been thought of as unconscious semantic priming may mostly rely on local associations based on shallow word co–occurrence.

    Of course, masked priming is only one possible way to model unconscious perception. In an attempt to provide converging evidence, I also tested overt and covert semantic facilitation by presenting prime words in the unattended vs. attended visual hemifield of brain–injured patients suffering from neglect. In seven sub–acute cases, the data show more solid PMI–based than DSM–based priming in the unattended hemifield, confirming the results obtained from healthy participants.

    Finally, in a fourth work package, I explored the neural underpinnings of semantic processing as revealed by EEG (Kutas & Federmeier, 2011). As the behavioral results of the previous study were much clearer when the primes were visible, I focused on this condition only. Semantic congruency was dichotomized in order to compare the ERPs evoked by related and unrelated pairs. Three different types of semantic similarity were taken into account: in a first category, primes and targets were often co–occurring but far apart in the DSM (e.g., cheese–mouse), while in a second category the two words were close in the DSM, but not likely to co–occur (e.g., lamp–torch). As a control condition, we added a third category with pairs that were both high in PMI and close in the DSM (e.g., lemon–orange). Mirroring the behavioral results, we observed a significant PMI effect in the N400 time window; no such effect emerged for the DSM.

    References
    Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1), 22-29.
    Clark, H. H. (1973). Space, time, semantics, and the child. In Cognitive development and acquisition of language (pp. 27-63). Academic Press.
    Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621-647.
    Marelli, M. (2017). Word-Embeddings Italian Semantic Spaces: a semantic model for psycholinguistic research. Psihologija, 50(4), 503-520.
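The two predictors compared in the thesis can be sketched as follows (the toy corpus and vectors are illustrative assumptions, not the actual stimuli): PMI scores surface co-occurrence directly, while a DSM scores similarity between distributional vectors, here via cosine.

```python
import math
from collections import Counter

def pmi(w1, w2, corpus, window=2):
    """PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), with p(x, y)
    estimated from co-occurrence within a `window`-token span."""
    unigrams = Counter(corpus)
    pairs = Counter()
    for i, w in enumerate(corpus):
        for j in range(i + 1, min(i + 1 + window, len(corpus))):
            pairs[frozenset((w, corpus[j]))] += 1
    n = len(corpus)
    p_xy = pairs[frozenset((w1, w2))] / n
    return math.log2(p_xy / ((unigrams[w1] / n) * (unigrams[w2] / n)))

def cosine(u, v):
    """DSM-style similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

toy_corpus = "the mouse ate the cheese the mouse saw the cheese".split()
assoc = pmi("mouse", "cheese", toy_corpus)      # positive: they co-occur
sim = cosine([0.1, 1.0, 0.3], [0.2, 0.9, 0.4])  # hypothetical DSM vectors
```

In the thesis the two indexes enter a regression on priming latencies as separate predictors, which is what allows their additive contribution to be tested.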

    Testing Low-Frequency Neural Activity in Sentence Understanding

    Full text link
    Human language has the unique characteristic that we can create infinite and novel phrases or sentences; this stems from the capacity for composition, which allows us to combine smaller units into bigger meaningful units. Composition involves following syntactic rules stored in memory and building well-formed structures incrementally. Research has shown that neural circuits can be associated with cognitive faculties such as memory and language, and there is evidence indicating where and when the neural indices of compositional processing arise. However, it is not yet clear "how" neural circuits actually implement compositional processes. This dissertation aims to probe "how" the composition of meaning is represented by neural circuits by investigating the role of low-frequency neural activity in carrying out composition. Neuroelectric signals were recorded with electroencephalography (EEG) to examine the functional interpretation of low-frequency neural activity in the so-called delta band of 0.5 to 3 Hz. Activity in this band has been associated with the processing of syntactic structures (Ding et al., 2016).

    First, whether these activities are indeed associated with hierarchy remains under debate. This dissertation uses a novel condition in which the same words are presented, but their order is changed to remove the syntactic structure. Only entrainment with syllables was found in this "reversed" condition, supporting the hypothesis that neural activities in the delta band entrain to abstract syntactic structures.

    Second, we test the timing with which language users combine words and comprehend sentences. How comprehension correlates with this low-frequency neural activity, and whether it represents an endogenous neural response or an evoked response, remains unclear. This dissertation manipulates the length of syllables and the regularity between syllables to test these hypotheses. The results support the view that this neural activity reflects an endogenous response and suggest that it reflects top-down processing.

    Third, what semantic information modulates this low-frequency neural activity is unknown. This dissertation examines several semantic variables typically associated with different aspects of semantic processing. The stimuli are created by varying the statistical association between words, world knowledge, and the conceptual results of semantic composition. The current results suggest that low-frequency neural activity is not driven by semantic processing.

    Based on the above findings, we propose that neural activities in the delta band reflect top-down predictive processing that involves syntactic information directly but not semantic information.
    PhD, Linguistics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/169907/1/chiawenl_1.pd
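In the frequency-tagging paradigm this dissertation builds on (Ding et al., 2016), syllables are presented at a fixed rate so that phrase- and sentence-level tracking shows up as distinct spectral peaks in the delta band. A minimal sketch of that spectral analysis on a synthetic signal (the 1/2/4 Hz rates and the noise level are illustrative assumptions):

```python
import numpy as np

fs = 100                       # sampling rate, Hz (illustrative)
t = np.arange(0, 20, 1 / fs)   # 20 s of signal -> 0.05 Hz resolution

# Toy "EEG": hypothesised responses at the sentence (1 Hz), phrase (2 Hz)
# and syllable (4 Hz) presentation rates, buried in noise.
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * 1 * t) + np.sin(2 * np.pi * 2 * t)
          + np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size))

# Amplitude spectrum: entrainment appears as sharp peaks at the
# linguistic rates. In a word-order-reversed control, the 1 Hz and 2 Hz
# (syntactic) peaks should vanish while the 4 Hz syllable peak remains.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
```

Real analyses additionally average spectra over trials and assess each peak against the surrounding noise bins, but the logic is the same.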