
    Semantic processing with and without awareness. Insights from computational linguistics and semantic priming.

    During my PhD, I explored how native speakers access semantic information from lexical stimuli, and whether consciousness plays a role in the process of meaning construction. In a first study, I exploited the metaphor linking time and space to assess the specific contribution of linguistically coded information to the emergence of priming. Time is metaphorically arranged on either the horizontal or the sagittal axis in space (Clark, 1973), but only the latter comes up in language (e.g., "a bright future in front of you"). In a semantic categorization task, temporal target words (e.g., earlier, later) were primed by spatial words that were processed either consciously (unmasked) or unconsciously (masked). With visible primes, priming was observed for both lateral and sagittal words; yet only the latter led to a significant effect when the primes were masked. Thus, unconscious word processing may be limited to those aspects of meaning that emerge in language use. In a second series of experiments, I tried to better characterize these aspects by taking advantage of Distributional Semantic Models (DSMs; Marelli, 2017), which represent word meaning as vectors built upon word co-occurrences in large textual databases. I compared state-of-the-art DSMs with Pointwise Mutual Information (PMI; Church & Hanks, 1990), a measure of local association between words that is based merely on their surface co-occurrence. In particular, I tested how the two indexes perform on a semantic priming dataset comprising visible and masked primes, and different stimulus onset asynchronies between the two stimuli. Subliminally, neither predictor alone elicited significant priming, although participants who showed some residual prime visibility showed larger effects. Post-hoc analyses showed that for subliminal priming to emerge, the additive contribution of both PMI and DSM was required. Supraliminally, PMI outperformed DSM in fitting the behavioral data. According to these results, what has traditionally been thought of as unconscious semantic priming may mostly rely on local associations based on shallow word co-occurrence. Of course, masked priming is only one possible way to model unconscious perception. In an attempt to provide converging evidence, I also tested overt and covert semantic facilitation by presenting prime words in the unattended vs. attended visual hemifield of brain-injured patients suffering from neglect. In seven sub-acute cases, the data showed more robust PMI-based than DSM-based priming in the unattended hemifield, confirming the results obtained from healthy participants. Finally, in a fourth work package, I explored the neural underpinnings of semantic processing as revealed by EEG (Kutas & Federmeier, 2011). As the behavioral results of the previous study were much clearer when the primes were visible, I focused on this condition only. Semantic congruency was dichotomized in order to compare the ERPs evoked by related and unrelated pairs. Three different types of semantic similarity were taken into account: in the first category, primes and targets frequently co-occurred but were far apart in the DSM (e.g., cheese-mouse); in the second, the two words were close in the DSM but unlikely to co-occur (e.g., lamp-torch). As a control condition, we added a third category with pairs that were both high in PMI and close in the DSM (e.g., lemon-orange). Mirroring the behavioral results, we observed a significant PMI effect in the N400 time window; no such effect emerged for DSM.
References: Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1), 22-29. Clark, H. H. (1973). Space, time, semantics, and the child. In Cognitive development and acquisition of language (pp. 27-63). Academic Press. Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621-647. Marelli, M. (2017). Word-Embeddings Italian Semantic Spaces: A semantic model for psycholinguistic research. Psihologija, 50(4), 503-520.
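    To make the two indexes contrasted in this abstract more concrete, the sketch below computes PMI from raw co-occurrence counts and a DSM-style cosine similarity between word vectors. It is only an illustration under invented data: the toy corpus, the hand-written vectors, and the word pairs are hypothetical, and the actual studies relied on large corpora and pre-trained Italian semantic spaces (Marelli, 2017) rather than code like this.

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the large textual databases the abstract mentions.
corpus = [
    ["cheese", "mouse", "trap"],
    ["lamp", "light", "bulb"],
    ["torch", "light", "battery"],
    ["lemon", "orange", "juice"],
    ["cheese", "mouse", "hole"],
]

word_counts = Counter(w for sent in corpus for w in sent)
pair_counts = Counter(tuple(sorted(p)) for sent in corpus
                      for p in combinations(set(sent), 2))
n_words = sum(word_counts.values())
n_pairs = sum(pair_counts.values())

def pmi(w1, w2):
    """Pointwise Mutual Information: log2(P(w1, w2) / (P(w1) * P(w2)))."""
    p_joint = pair_counts[tuple(sorted((w1, w2)))] / n_pairs
    if p_joint == 0:
        return float("-inf")
    p1 = word_counts[w1] / n_words
    p2 = word_counts[w2] / n_words
    return math.log2(p_joint / (p1 * p2))

def cosine(v1, v2):
    """DSM-style similarity: cosine between two (here hand-written) word vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm

print(round(pmi("cheese", "mouse"), 2))                    # high local association
print(round(cosine([0.2, 0.9, 0.1], [0.3, 0.8, 0.2]), 2))  # high vector similarity
```

    The point of the contrast is that PMI reflects only how often two words appear together, whereas the cosine reflects the similarity of their global distributional profiles.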

    Simulating Speed in Language: Contributions from vision, audition and action

    Embodied theories propose that understanding meaning in language requires the mental simulation of the entities being referred to. These mental simulations would make use of the same modality-specific systems involved in perceiving and acting upon such entities, thereby grounding language in the real world. However, embodied theories are currently underspecified in terms of how much information from an event is contained in mental simulations, and what features of experience are included. The thesis addresses comprehension of language that describes the speed of events. Investigating speed allows embodied theories to be extended to a more complex feature of events. Further, speed is a fine-grained feature, so testing an embodied theory of speed will reveal whether or not mental simulations include the fine details of real-world experience. Within the thesis, four main methods of investigation were used, assessing simulation of speed with different types of speed language under different conditions: behavioural testing combining speed in language with speed in perception and action; eye-tracking investigating whether eye movements to a visual scene are affected by speed in sentences; a psychophysics paradigm assessing whether speed in language affects visual perception processes; and finally, as a crucial test of embodiment, whether or not Parkinson's patients, who have difficulty moving quickly, also have problems with the comprehension of speed language. The main findings of the thesis are that: (1) speed, a fine-grained and abstract dimension, is simulated during comprehension; (2) simulations are dynamic and context-dependent; and (3) simulations of speed are specific to biological motion and can encode the specific effectors used in an action. These results help to specify current embodied theories in terms of what the nature of simulations is and what factors they are sensitive to, in addition to broadly providing support for the sharing of cognitive/neural processes between language, action and perception.

    Representing meaning: a feature-based model of object and action words

    The representation of word meaning has received substantial attention in the psycholinguistic literature over the past decades, yet the vast majority of studies have been limited to words referring to concrete objects. The aim of the present work is to provide a theoretically and neurally plausible model of lexical-semantic representations, not only for words referring to concrete objects but also for words referring to actions and events, using a common set of assumptions across domains. In order to do so, features of meaning are generated by naïve speakers and used as a window into important aspects of representation. A first series of analyses tests how the meanings of words of different types are reflected in features associated with different modalities of sensory-motor experience, and how featural properties may be related to patterns of impairment in language-disordered populations. The features of meaning are then used to generate a model of lexical-semantic similarity, in which these different types of words are represented within a single system, under the assumption that lexical-semantic representations serve to provide an interface between conceptual knowledge derived in part from sensory-motor experience, and other linguistic information such as syntax, phonology and orthography. Predictions generated from this model are tested in a series of behavioural experiments designed to address two main questions: whether similarity measures based on speaker-generated features can predict fine-grained semantic similarity effects, and whether the predictive quality of the model is comparable for words referring to objects and words referring to actions. The results of five behavioural experiments consistently reveal graded semantic effects as predicted by the feature-based model, of similar magnitude for objects and actions. The model's fine-grained predictive performance is also found to be superior to that of other word-based models of representation (Latent Semantic Analysis, and similarity measures derived from WordNet).
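    The thesis derives similarity from speaker-generated features; a minimal way to picture such a measure is a cosine over feature-frequency vectors, as in the hypothetical sketch below. The feature norms, words, and counts are invented for illustration and do not reproduce the model's actual similarity metric.

```python
import math

# Hypothetical speaker-generated feature counts (how many participants listed
# each feature for a word); the thesis collected such norms for both object
# and action words.
feature_norms = {
    "hammer": {"has_handle": 18, "made_of_metal": 15, "used_for_hitting": 20},
    "axe":    {"has_handle": 17, "made_of_metal": 14, "used_for_chopping": 19},
    "kick":   {"uses_leg": 22, "involves_motion": 16, "done_with_force": 12},
    "punch":  {"uses_arm": 21, "involves_motion": 15, "done_with_force": 14},
}

def feature_cosine(word1, word2):
    """Cosine similarity over speaker-generated feature vectors."""
    f1, f2 = feature_norms[word1], feature_norms[word2]
    shared = set(f1) & set(f2)
    dot = sum(f1[f] * f2[f] for f in shared)
    norm1 = math.sqrt(sum(v * v for v in f1.values()))
    norm2 = math.sqrt(sum(v * v for v in f2.values()))
    return dot / (norm1 * norm2)

print(round(feature_cosine("hammer", "axe"), 2))   # object pair: high feature overlap
print(round(feature_cosine("kick", "punch"), 2))   # action pair: comparable overlap
```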

    Computational explorations of semantic cognition

    Motivated by the widespread use of distributional models of semantics within the cognitive science community, we follow a computational modelling approach in order to better understand and expand the applicability of such models, as well as to test potential ways in which they can be improved and extended. We review evidence in favour of the assumption that distributional models capture important aspects of semantic cognition. We look at the models' ability to account for behavioural data and fMRI patterns of brain activity, and investigate the structure of model-based semantic networks. We test whether introducing affective information, obtained from a neural network model designed to predict emojis from co-occurring text, can improve the performance of linguistic and linguistic-visual models of semantics in accounting for similarity/relatedness ratings. We find that adding visual and affective representations improves performance, with visual information benefiting concrete words and affective information benefiting abstract words in particular. We describe a processing model based on distributional semantics, in which activation spreads throughout a semantic network, as dictated by the patterns of semantic similarity between words. We show that the activation profile of the network, measured at various time points, can account for response times and accuracy in lexical and semantic decision tasks, as well as for concreteness/imageability and similarity/relatedness ratings. We evaluate the differences between concrete and abstract words in terms of the structure of the semantic networks derived from distributional models of semantics. We examine how this structure is related to a number of factors that have been argued to differ between concrete and abstract words, namely imageability, age of acquisition, hedonic valence, contextual diversity, and semantic diversity. We use distributional models to explore factors that might be responsible for the poor linguistic performance of children suffering from Developmental Language Disorder. Based on the assumption that certain model parameters can be given a psychological interpretation, we start from "healthy" models and generate "lesioned" models by manipulating the parameters. This allows us to determine the importance of each factor and its effects with respect to learning concrete vs. abstract words.
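    As a rough illustration of the spreading-activation idea described above (activation injected at a word and propagated through a similarity-weighted semantic network, with the profile read out over time), here is a minimal Python sketch. The word list, similarity matrix, decay parameter, and normalisation are all invented assumptions, not the thesis's actual processing model.

```python
import numpy as np

# Hypothetical similarity matrix between five words, standing in for the
# DSM-derived semantic network described in the abstract (values invented).
words = ["dog", "cat", "leash", "idea", "theory"]
sim = np.array([
    [1.0, 0.8, 0.5, 0.1, 0.1],
    [0.8, 1.0, 0.4, 0.1, 0.1],
    [0.5, 0.4, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.7],
    [0.1, 0.1, 0.1, 0.7, 1.0],
])

def spread_activation(start_word, steps=3, decay=0.5):
    """Inject activation at one node and let it spread along similarity-weighted
    edges; the activation profile is recorded after each step."""
    act = np.zeros(len(words))
    act[words.index(start_word)] = 1.0
    profile = [act.copy()]
    for _ in range(steps):
        act = decay * (sim @ act)   # each node receives similarity-weighted input
        act = act / act.max()       # normalise so activation stays bounded
        profile.append(act.copy())
    return profile

for step, act in enumerate(spread_activation("dog")):
    print(step, dict(zip(words, act.round(2))))
```

    In a scheme like this, the time course of activation at related versus unrelated nodes is what gets mapped onto response times and ratings.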

    EXPRESS: Orthographic and feature-level contributions to letter identification

    Word recognition is facilitated by primes containing visually similar letters (dentjst-dentist; Marcet & Perea, 2017), suggesting that letter identities are encoded with initial uncertainty. Orthographic knowledge also guides letter identification, as readers are more accurate at identifying letters in words compared to pseudowords (Reicher, 1969; Wheeler, 1970). We investigated how higher-level orthographic knowledge and low-level visual feature analysis operate in combination during letter identification. We conducted a Reicher-Wheeler task to compare readers' ability to discriminate between visually similar and dissimilar letters across different orthographic contexts (words, pseudowords, and consonant strings). Orthographic context and visual similarity had independent effects on letter identification, and there was no interaction between these factors. The magnitude of these effects indicated that higher-level orthographic information plays a greater role than lower-level visual feature information in letter identification. We propose that readers use orthographic knowledge to refine potential letter candidates while visual feature information is accumulated. This combination of higher-level knowledge and low-level feature analysis may be essential in permitting the flexibility required to identify visual variations of the same letter (e.g. N-n) whilst maintaining enough precision to tell visually similar letters apart (e.g. n-h). These results provide new insights into the integration of visual and linguistic information and highlight the need for greater integration between models of reading and visual processing. This study was pre-registered on the Open Science Framework. The pre-registration, stimuli, instructions, trial-level data, and analysis scripts are openly available (https://osf.io/p4q9u/).

    Experientially grounded language production: Advancing our understanding of semantic processing during lexical selection

    The process of lexical selection, i.e. producing the right words to get an intended message across, is not well understood. In particular, meaning aspects grounded in sensorimotor experience, and their role during lexical selection, have not been investigated widely. Here, we investigated the role of experientially grounded meaning aspects in two studies in which participants had to produce a noun to complete sentences describing sceneries. In Study 1, the visual appearance of the sentence fragments was manipulated so that they seemed to move upwards or downwards on screen. In Study 2, participants moved their head upwards or downwards while listening to the sentence fragments. We investigated whether the spatial properties of the freely chosen nouns were influenced by the spatial manipulations as well as by the spatial properties of the sentences. The vertical visual manipulation used in Study 1 did not influence the spatial properties of the produced words. However, the head movements in Study 2 influenced participants' lexical choices: after upward movements, the referents of the produced words were located higher up in space than after downward movements (and vice versa). Furthermore, this effect of movement on the spatial properties of the produced nouns increased with participants' interoceptive sensibility. Additionally, the spatial properties of the stimulus sentences influenced the spatial properties of the produced words in both studies. Thus, experientially grounded meaning aspects, whether embedded in language or reactivated via bodily manipulations, may influence which words we choose when speaking, and interindividual differences may moderate these effects. The findings are related to current theories of semantics. Furthermore, this dissertation extends the methodological repertoire of language production research by showing how language production studies with overt articulation in picture naming tasks can be run online (Study 3).

    Modelling the acquisition of natural language categories

    The ability to reason about categories and category membership is fundamental to human cognition, and as a result a considerable amount of research has explored the acquisition and modelling of categorical structure from a variety of perspectives. These range from feature norming studies involving adult participants (McRae et al., 2005) to long-term infant behavioural studies (Bornstein and Mash, 2010) to modelling experiments involving artificial stimuli (Quinn, 1987). In this thesis we focus on the task of natural language categorisation, modelling the cognitively plausible acquisition of semantic categories for nouns based on purely linguistic input. Focusing on natural language categories and linguistic input allows us to make use of the tools of distributional semantics to create high-quality representations of meaning in a fully unsupervised fashion, a property not commonly seen in traditional studies of categorisation. We explore how natural language categories can be represented using distributional models of semantics; we construct concept representations from corpora and evaluate their performance against psychological representations based on human-produced features, and show that distributional models can provide a high-quality substitute for equivalent feature representations. Having shown that corpus-based concept representations can be used to model category structure, we turn our focus to the task of modelling category acquisition and exploring how category structure evolves over time. We identify two key properties necessary for cognitive plausibility in a model of category acquisition, incrementality and non-parametricity, and construct a pair of models designed around these constraints. Both models are based on a graphical representation of semantics in which a category corresponds to a densely connected subgraph. The first model identifies such subgraphs and uses them to extract a flat organisation of concepts into categories; the second uses a generative approach to identify implicit hierarchical structure and extract a hierarchical category organisation. We compare both models against existing methods of identifying category structure in corpora, and find that they outperform their counterparts on a variety of tasks. Furthermore, the incremental nature of our models allows us to predict the structure of categories during their formation and thus to model category acquisition more accurately, a task to which batch-trained exemplar and prototype models are poorly suited.
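    The models described above are incremental and non-parametric, treating a category as a densely connected subgraph of a word-similarity graph. As a loose, hypothetical illustration of that idea only, the sketch below updates a graph one similarity observation at a time and reads off thresholded connected components as stand-in "categories"; the thesis's actual models use richer subgraph detection and a generative hierarchical component. The word pairs, similarity values, and threshold are invented.

```python
import networkx as nx  # assumed available; any graph library would do

# Hypothetical pairwise similarities arriving one at a time, standing in for
# the distributional similarities computed from corpora.
stream = [
    ("dog", "cat", 0.8), ("cat", "mouse", 0.6), ("dog", "wolf", 0.7),
    ("car", "truck", 0.9), ("truck", "bus", 0.7), ("car", "mouse", 0.1),
]

THRESHOLD = 0.5  # only sufficiently similar words are linked
graph = nx.Graph()

for w1, w2, similarity in stream:      # incremental: one observation at a time
    graph.add_node(w1)
    graph.add_node(w2)
    if similarity >= THRESHOLD:
        graph.add_edge(w1, w2, weight=similarity)
    # Categories = densely connected regions; here crudely approximated by the
    # connected components of the thresholded graph at this point in time.
    categories = [sorted(c) for c in nx.connected_components(graph)]
    print(categories)
```

    Because the graph is updated observation by observation, the intermediate category structure can be inspected at any point, which is what gives an incremental model its advantage over batch-trained exemplar and prototype models.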

    The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study

    Carminati MN, Knoeferle P. The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study. Presented at Architectures and Mechanisms for Language Processing (AMLaP), Riva del Garda, Italy.

    Embodied Processing at Six Linguistic Granularity Levels: A Consensus Paper

    Language processing is influenced by sensorimotor experiences. Here, we review behavioral evidence for embodied and grounded influences in language processing across six linguistic levels of granularity. We examine (a) sub-word features, discussing grounded influences on iconicity (systematic associations between word form and meaning); (b) words, discussing boundary conditions and generalizations for the simulation of color, sensory modality, and spatial position; (c) sentences, discussing boundary conditions and applications of action direction simulation; (d) texts, discussing how the teaching of simulation can improve comprehension in beginning readers; (e) conversations, discussing how multi-modal cues improve turn-taking and alignment; and (f) text corpora, discussing how distributional semantic models can reveal how grounded and embodied knowledge is encoded in texts. These approaches are converging on a convincing account of the psychology of language, but at the same time, there are important criticisms of the embodied approach and of specific experimental paradigms. The surest way forward requires the adoption of a wide array of scientific methods. By providing complementary evidence, a combination of multiple methods at various levels of granularity can help us gain a more complete understanding of the role of embodiment and grounding in language processing.