
    Should psychology ignore the language of the brain?

    Claims have recently been made that neuroscientific data do not contribute to our understanding of psychological functions. Here I argue that these criticisms are based solely on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.

    The neurocognition of syntactic processing


    Biologically Plausible Connectionist Prediction of Natural Language Thematic Relations

    In symbolic Natural Language Processing (NLP) systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by a rule-based grammar. Another approach to NLP uses the connectionist model, which has the benefits of learning, generalization, and fault tolerance, among others. A third option merges the two approaches into a hybrid one: a symbolic thematic theory supplies the connectionist network with initial knowledge. Inspired by neuroscience, we propose a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor), designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture takes as input a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and produces as output the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" the thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture and adopting a psycholinguistic view of thematic theory. Funding: FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo, Brazil), grant 2008/08245-4.
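The kind of mapping such a network learns, from featural word representations to thematic roles, can be illustrated with a minimal sketch. This is not the BIO theta PRED architecture; the feature set and training examples below are invented for illustration, and the model is a bare perceptron rather than the paper's biologically inspired network.

```python
# Toy sketch of a connectionist thematic-role predictor.
# Features and data are hypothetical, not from BIO theta PRED.
# Feature vector: [animate, concrete_object, precedes_verb]
TRAIN = [
    ("boy",  (1, 0, 1), "AGENT"),
    ("dog",  (1, 0, 1), "AGENT"),
    ("ball", (0, 1, 0), "PATIENT"),
    ("cake", (0, 1, 0), "PATIENT"),
]

def train_perceptron(data, epochs=20):
    """Classic perceptron rule; AGENT = +1, PATIENT = -1."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for _, x, role in data:
            y = 1 if role == "AGENT" else -1
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge weights toward y
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "AGENT" if score > 0 else "PATIENT"

w, b = train_perceptron(TRAIN)
```

Because the toy data are linearly separable, the perceptron converges and assigns the AGENT role to animate preverbal nouns and PATIENT to inanimate postverbal ones.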

    Am I hurt?: Evaluating Psychological Pain Detection in Hindi Text using Transformer-based Models

    The automated evaluation of pain is critical for developing effective pain management approaches that seek to alleviate pain while preserving patients' functioning. Transformer-based models can aid in detecting pain from Hindi text data gathered from social media by leveraging their ability to capture complex language patterns and contextual information. By understanding the nuances and context of Hindi text, transformer models can effectively identify linguistic cues, sentiment, and expressions associated with pain, enabling the detection and analysis of pain-related content in social media posts. The purpose of this research is to analyse the feasibility of using NLP techniques to automatically identify pain in Hindi textual data, providing a valuable tool for pain assessment in Hindi-speaking populations. The research showcases the HindiPainNet model, a deep neural network that employs the IndicBERT model to classify the dataset into two class labels {pain, no_pain} for detecting pain in Hindi textual data. The model is trained and tested on a novel dataset, दर्द-ए-शायरी (pronounced Dard-e-Shayari), curated from posts on social media platforms. The results demonstrate the model's effectiveness, achieving an accuracy of 70.5%. This pioneering research highlights the potential of using textual data from diverse sources to identify and understand pain experiences based on psychosocial factors. It could pave the way for the development of automated pain assessment tools that help medical professionals comprehend and treat pain in Hindi-speaking populations. Additionally, it opens avenues for further NLP-based multilingual pain detection research, addressing the needs of diverse language communities.
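The {pain, no_pain} task framing can be made concrete with a far simpler baseline than HindiPainNet. The sketch below is a toy add-one-smoothed Naive Bayes classifier over an invented mini-corpus of romanized Hindi phrases; it illustrates only the binary labeling setup, not the IndicBERT fine-tuning the paper actually performs.

```python
import math

# Toy {pain, no_pain} text classifier: add-one-smoothed Naive Bayes.
# The mini-corpus (romanized Hindi) is invented for illustration only;
# the paper itself fine-tunes IndicBERT on the Dard-e-Shayari dataset.
TRAIN = {
    "pain":    ["dard hai", "dil dukhta hai"],
    "no_pain": ["khush hoon", "acha din"],
}

def fit(train):
    vocab = {w for docs in train.values() for d in docs for w in d.split()}
    counts = {}
    for label, docs in train.items():
        tokens = [w for d in docs for w in d.split()]
        counts[label] = (tokens, len(tokens))
    return vocab, counts

def classify(text, vocab, counts):
    best_label, best_logp = None, -math.inf
    for label, (tokens, n) in counts.items():
        logp = 0.0  # equal class priors: two docs per label
        for w in text.split():
            # add-one (Laplace) smoothing over the shared vocabulary
            logp += math.log((tokens.count(w) + 1) / (n + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

vocab, counts = fit(TRAIN)
```

On this toy corpus, phrases containing pain vocabulary ("dard") score higher under the pain class, and neutral phrases under no_pain.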

    A broad-coverage distributed connectionist model of visual word recognition

    In this study we describe a distributed connectionist model of morphological processing, covering a realistically sized sample of the English language. The purpose of this model is to explore how effects of discrete, hierarchically structured morphological paradigms can arise as a result of the statistical sub-regularities in the mapping between word forms and word meanings. We present a model that learns to produce at its output a realistic semantic representation of a word on presentation of a distributed representation of its orthography. After training, in three experiments, we compare the outputs of the model with the lexical decision latencies for large sets of English nouns and verbs. We show that the model has developed detailed representations of morphological structure, giving rise to effects analogous to those observed in visual lexical decision experiments. In addition, we show how the association between word form and word meaning also gives rise to recently reported differences between regular and irregular verbs, even in their completely regular present-tense forms. We interpret these results as underlining the key importance for lexical processing of the statistical regularities in the mappings between form and meaning.

    Semantic processing with and without awareness. Insights from computational linguistics and semantic priming.

    During my PhD, I explored how native speakers access semantic information from lexical stimuli, and whether consciousness plays a role in the process of meaning construction. In a first study, I exploited the metaphor linking time and space to assess the specific contribution of linguistically coded information to the emergence of priming. Time is metaphorically arranged on either the horizontal or the sagittal axis in space (Clark, 1973), but only the latter comes up in language (e.g., "a bright future in front of you"). In a semantic categorization task, temporal target words (e.g., earlier, later) were primed by spatial words that were processed either consciously (unmasked) or unconsciously (masked). With visible primes, priming was observed for both lateral and sagittal words; yet only the latter led to a significant effect when the primes were masked. Thus, unconscious word processing may be limited to those aspects of meaning that emerge in language use.

    In a second series of experiments, I tried to better characterize these aspects by taking advantage of Distributional Semantic Models (DSMs; Marelli, 2017), which represent word meaning as vectors built upon word co-occurrences in large textual databases. I compared state-of-the-art DSMs with Pointwise Mutual Information (PMI; Church & Hanks, 1990), a measure of local association between words that is merely based on their surface co-occurrence. In particular, I tested how the two indexes perform on a semantic priming dataset comprising visible and masked primes and different stimulus onset asynchronies between the two stimuli. Subliminally, neither predictor alone elicited significant priming, although participants who showed some residual prime visibility showed larger effects. Post-hoc analyses showed that for subliminal priming to emerge, the additive contribution of both PMI and DSM was required. Supraliminally, PMI outperformed DSM in the fit to the behavioral data. According to these results, what has traditionally been thought of as unconscious semantic priming may mostly rely on local associations based on shallow word co-occurrence.

    Of course, masked priming is only one possible way to model unconscious perception. In an attempt to provide converging evidence, I also tested overt and covert semantic facilitation by presenting prime words in the unattended vs. attended visual hemifield of brain-injured patients suffering from neglect. In seven sub-acute cases, the data show more solid PMI-based than DSM-based priming in the unattended hemifield, confirming the results obtained from healthy participants.

    Finally, in a fourth work package, I explored the neural underpinnings of semantic processing as revealed by EEG (Kutas & Federmeier, 2011). As the behavioral results of the previous study were much clearer when the primes were visible, I focused on this condition only. Semantic congruency was dichotomized in order to compare the ERPs evoked by related and unrelated pairs. Three different types of semantic similarity were taken into account: in a first category, primes and targets often co-occurred but were far apart in the DSM (e.g., cheese-mouse); in a second category, the two words were close in the DSM but unlikely to co-occur (e.g., lamp-torch); as a control condition, we added a third category with pairs that were both high in PMI and close in the DSM (e.g., lemon-orange). Mirroring the behavioral results, we observed a significant PMI effect in the N400 time window; no such effect emerged for DSM.

    References
    Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1), 22-29.
    Clark, H. H. (1973). Space, time, semantics, and the child. In Cognitive Development and the Acquisition of Language (pp. 27-63). Academic Press.
    Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621-647.
    Marelli, M. (2017). Word-Embeddings Italian Semantic Spaces: a semantic model for psycholinguistic research. Psihologija, 50(4), 503-520.
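The two competing predictors contrasted in this work can be made concrete: PMI compares the observed co-occurrence of two words against what chance predicts, while a DSM scores relatedness as the cosine between word vectors. A minimal sketch, with toy counts and vectors invented for illustration:

```python
import math

def pmi(count_xy, count_x, count_y, n):
    """Pointwise Mutual Information (Church & Hanks, 1990):
    log2 of observed co-occurrence probability over chance expectation."""
    return math.log2((count_xy / n) / ((count_x / n) * (count_y / n)))

def cosine(u, v):
    """DSM-style relatedness: cosine between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy numbers: the pair co-occurs 2 times in a 100-token corpus,
# with the single words occurring 4 and 5 times respectively.
score = pmi(count_xy=2, count_x=4, count_y=5, n=100)  # log2(10), approx. 3.32
sim = cosine((1, 2, 2), (2, 1, 2))                    # 8/9, approx. 0.89
```

A high-PMI, low-cosine pair (like cheese-mouse above) co-occurs often but lives in different regions of the semantic space; a low-PMI, high-cosine pair (like lamp-torch) is the reverse.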

    Lexical Complexity Prediction with Assembly Models

    Tuning the complexity of one's writing is essential to presenting ideas in a logical, intuitive manner to audiences. This paper describes a system submitted by team BigGreen to LCP 2021 for predicting the lexical complexity of English words in a given context. We assemble a feature-engineering-based model and a deep neural network model with an underlying Transformer architecture based on BERT. While BERT itself performs competitively, our feature-engineering-based model helps in extreme cases, e.g., separating instances of easy and neutral difficulty. Our handcrafted features comprise a breadth of lexical, semantic, syntactic, and novel phonetic measures. Visualizations of BERT attention maps offer insight into potential features that Transformer models may implicitly learn when fine-tuned for lexical complexity prediction. Our assembly technique performs reasonably well at predicting the complexities of single words, and we demonstrate how such techniques can be harnessed to perform well on multi-word expressions (MWEs) too.
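One generic way to assemble two such regressors is a weighted average of their complexity scores, with the weight chosen on a validation set. The paper's exact blending scheme is not described here, so the following is a hedged sketch with invented numbers, not the BigGreen system:

```python
# Generic sketch of assembling two lexical-complexity regressors.
# The weighting scheme and toy numbers are illustrative, not from the paper.

def blend(p_feat, p_bert, w):
    """Weighted average of feature-model and BERT-model scores."""
    return [w * a + (1 - w) * b for a, b in zip(p_feat, p_bert)]

def mse(pred, gold):
    return sum((p - g) ** 2 for p, g in zip(pred, gold)) / len(gold)

def tune_weight(p_feat, p_bert, gold, steps=10):
    """Grid-search the blend weight on a held-out validation set."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda w: mse(blend(p_feat, p_bert, w), gold))

# Toy validation data: here the BERT model happens to be perfect,
# so the search puts all the weight on it (w = 0).
gold   = [0.20, 0.50, 0.35]
p_feat = [0.40, 0.10, 0.50]
p_bert = [0.20, 0.50, 0.35]
best_w = tune_weight(p_feat, p_bert, gold)
```

In practice the tuned weight would sit between the extremes, letting the handcrafted features correct the neural model on the easy/neutral boundary cases the abstract mentions.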

    Morphological Representations In Lexical Processing

    This dissertation integrates insights from theoretical linguistics and the psycholinguistic literature through an investigation of the morphological representations involved in auditory lexical processing. Previous work in theoretical morphology, spoken word recognition, and morphological processing is considered together in generating hypotheses. Chapter 2 provides theoretical and methodological background. Theoretical linguistics is considered a subset of psycholinguistic inquiry; I argue that this perspective is beneficial to both subfields. Modality is a crucial theme: most work investigating morphological processing involves visual presentation, whereas this dissertation exclusively examines the auditory modality. Experimental work in this dissertation uses primed auditory lexical decision, and important considerations for this methodology are discussed in Chapter 2. Chapter 3 explores the role of morpho-phonological representations through a novel experimental design which examines the sensitivity of phonological rhyme priming to morphological structure, specifically the extent to which stems of complex words are available for rhyme priming. Results suggest that phonological rhyme priming can facilitate phonological representations without facilitating syntactic representations, consistent with an architecture in which phonological and syntactic representations are separated. Furthermore, there is a directional asymmetry to the effect: stems in complex words are available for rhyme priming in targets but not primes. This asymmetry invites attention to the time-course of auditory morphological processing and a theoretical perspective in which syntactic and phonological recombination are considered separately. Chapter 4 concerns the processing of inflectional affixation. A distance manipulation is incorporated into two studies which compare word repetition priming to morphological stem priming.
The results are informative about the time-course of the effects of the representations involved in inflectional affixation. Furthermore, the results are consistent with abstract and episodic components of morphological priming, which can be attributed to stem and recombination representations, respectively. Finally, a morphological affix priming study focuses on the representation of the inflectional affix. Results are consistent with an account in which affixes are isolable representations in memory and can therefore be facilitated through identity priming. To summarise, by combining insights from theoretical linguistics and the psycholinguistic literature, this dissertation advances our understanding of the cognitive architecture of morphological representations and generates hypotheses for future research.