
    Word Superiority Effect in Bilingual Lexical Decision

    The present study is part of a larger-scale research project in which the temporal characteristics of written word recognition in bilinguals are studied. The goal of this lexical decision test is to gain information about the temporal characteristics of recognition at the orthographic, phonological, and semantic levels of processing. The research questions concern the temporal characteristics as well as the ERP components of bilingual written word recognition. 23 Hungarian-English bilingual participants were tested in the electroencephalogram (EEG) laboratory of the University of Pannonia. All of them have C1-level English proficiency and use English at work and in their everyday lives on a daily basis. The results show different patterns for real-word and non-word processing in the parieto-occipital area in the early (150-200 ms) and late (200-250 ms) phases of the N170 ERP component, which reflects the perceptual phase of recognition. This means that word recognition starts as early as 200-250 ms after stimulus onset with orthographic-phonological processing. However, at this level participants can only identify whether a word is real or not, not whether it is Hungarian or English.
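
    As a rough illustration of the kind of window-based ERP comparison described above (not the study's actual pipeline), the sketch below averages amplitude over a set of parieto-occipital channels within the early and late N170 windows and contrasts real words with non-words; the data, channel indices, and sampling rate are placeholders.

        import numpy as np
        from scipy import stats

        # Hypothetical epoched EEG: (n_epochs, n_channels, n_samples) at 1000 Hz,
        # with each epoch starting at stimulus onset (t = 0 ms).
        rng = np.random.default_rng(0)
        epochs = rng.normal(size=(200, 64, 500))              # placeholder data
        is_word = rng.integers(0, 2, size=200).astype(bool)   # real word vs non-word labels
        parieto_occipital = [52, 53, 58, 59, 60]              # illustrative channel indices
        sfreq = 1000.0                                        # sampling rate in Hz

        def window_mean(data, tmin_ms, tmax_ms):
            """Mean amplitude over the selected channels within a latency window."""
            lo, hi = int(tmin_ms * sfreq / 1000), int(tmax_ms * sfreq / 1000)
            return data[:, parieto_occipital, lo:hi].mean(axis=(1, 2))

        # Early (150-200 ms) and late (200-250 ms) phases of the N170 window
        for tmin, tmax in [(150, 200), (200, 250)]:
            amp = window_mean(epochs, tmin, tmax)
            t, p = stats.ttest_ind(amp[is_word], amp[~is_word])
            print(f"{tmin}-{tmax} ms: words vs non-words, t={t:.2f}, p={p:.3f}")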

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, which is central to understanding the mechanisms that implement cognition in general and conceptual processing in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations of how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.

    Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments

    In this paper we show that a corpus of a few thousand Wikipedia articles about concrete or visualizable concepts can be used to produce a low-dimensional semantic feature representation of those concepts. The purpose of such a representation is to serve as a model of the mental context of a subject during functional magnetic resonance imaging (fMRI) experiments. A recent study by Mitchell et al. (2008) [19] showed that it was possible to predict fMRI data acquired while subjects thought about a concrete concept, given a representation of those concepts in terms of semantic features obtained with human supervision. We use topic models on our corpus to learn semantic features from text in an unsupervised manner, and show that these features can outperform those of Mitchell et al. (2008) [19] in demanding 12-way and 60-way classification tasks. We also show that these features can be used to uncover similarity relations in brain activation for different concepts that parallel the relations found in behavioral data from human subjects.
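
    The unsupervised feature-learning step can be sketched roughly as follows, using scikit-learn's LatentDirichletAllocation on a toy three-document corpus; the documents, number of topics, and settings are invented for illustration and are not the paper's actual corpus or model configuration.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Toy stand-in for Wikipedia articles about concrete concepts (one document per concept).
        concepts = ["hammer", "apartment", "carrot"]
        documents = [
            "a hammer is a hand tool used to drive nails into wood",
            "an apartment is a self-contained housing unit that occupies part of a building",
            "the carrot is a root vegetable usually orange in colour",
        ]

        # Bag-of-words counts, then a low-dimensional topic representation per concept.
        counts = CountVectorizer(stop_words="english").fit_transform(documents)
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        features = lda.fit_transform(counts)   # shape: (n_concepts, n_topics)

        for concept, vec in zip(concepts, features):
            print(concept, vec.round(3))

    In the spirit of Mitchell et al. (2008), topic proportions of this kind could then serve as the per-concept semantic features in a linear model predicting voxel activation.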

    Visual Word Recognition Patterns of Hungarian-English Bilinguals – Homograph Effect in Bilingual Language Decision

    The present study is part of a larger-scale research project in which the processes of written word recognition are studied in bilinguals. The goal of our lexical decision experiments is to gain information about the temporal characteristics of recognition at the orthographic, phonological, and semantic levels of processing. The research questions concern behavioral differences as well as the ERP components of recognizing English words, Hungarian words, and interlexical homographs. 23 Hungarian-English bilingual participants were tested in an electroencephalogram (EEG) laboratory. In the recognition of Hungarian and English words and homographs, the mean response language per participant indicated high accuracy for both the Hungarian and English conditions (96% and 98%, respectively). In contrast, responses to the homographs were biased towards English (27% Hungarian responses). Multiple comparisons confirmed no difference in the mean response times for Hungarian and English words, whereas the interlexical homographs produced responses around 150 ms slower. In the recognition of Hungarian and English words, there was no difference between the two categories in the early recognition phases, corresponding to the orthographic-phonological level. However, the neural representations of the two languages differed later, reflecting differences in semantic or decision-related processes. In the case of the Hungarian-English interlexical homographs, the ERP waveforms did not differ significantly between items perceived as English and items perceived as Hungarian. Although there is a difference between the brain activations at the temporal and frontal electrode sites, this difference is not statistically significant. These data are consistent with earlier findings on the homograph effect (Navracsics & Sáry, 2013), according to which recognition places a greater cognitive burden on participants and reaction times are longer because both lexicons are active.
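
    As a small illustration of the response-time comparison reported above (not the study's actual analysis), the sketch below runs a one-way ANOVA over simulated per-participant mean response times for Hungarian words, English words, and interlexical homographs, followed by Bonferroni-corrected pairwise t-tests; all numbers are placeholders.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Simulated mean response times (ms) per participant and condition.
        rt = {
            "hungarian": rng.normal(650, 60, 23),
            "english":   rng.normal(655, 60, 23),
            "homograph": rng.normal(800, 80, 23),   # roughly 150 ms slower, as reported
        }

        f, p = stats.f_oneway(*rt.values())
        print(f"one-way ANOVA: F={f:.2f}, p={p:.4f}")

        pairs = [("hungarian", "english"), ("hungarian", "homograph"), ("english", "homograph")]
        for a, b in pairs:
            t, p = stats.ttest_rel(rt[a], rt[b])   # within-participant comparison
            print(f"{a} vs {b}: t={t:.2f}, Bonferroni p={min(p * len(pairs), 1.0):.4f}")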

    An Investigation on the Cognitive Effects of Emoji Usage in Text

    Cross-participant modelling based on joint or disjoint feature selection: an fMRI conceptual decoding study

    Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data. However, they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, where feature selection is carried out using a held-out subset of a target dataset before training a linear classifier on a source dataset. Single trials of functional MRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques (joint ranking feature selection (JRFS) and disjoint feature selection (DJFS)), classification performance during cross-session prediction improved greatly relative to feature selection on the source session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and when using groupwise training, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique or a minimal searchlight (one voxel in size) is appropriate compared with larger searchlights.
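
    The decoding pipeline can be illustrated schematically with scikit-learn, in the general spirit of the disjoint selection idea (select voxels on a held-out portion of the target session, train the regularized linear classifier on the source session); this is not the authors' JRFS/DJFS implementation, and the data, voxel counts, and hyperparameters below are placeholders.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Placeholder "source" and "target" sessions: trials x voxels, with semantic class labels.
        X_source, y_source = rng.normal(size=(120, 5000)), rng.integers(0, 2, 120)
        X_target, y_target = rng.normal(size=(120, 5000)), rng.integers(0, 2, 120)

        # Select informative voxels via univariate ANOVA on a held-out subset of the target data.
        held_out, test = slice(0, 40), slice(40, None)
        selector = SelectKBest(f_classif, k=500).fit(X_target[held_out], y_target[held_out])

        # Train an L2-regularized classifier on the source session, test on the remaining target trials.
        clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        clf.fit(selector.transform(X_source), y_source)
        acc = clf.score(selector.transform(X_target[test]), y_target[test])
        print(f"cross-session accuracy on held-out target trials: {acc:.2f}")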

    Effects of Trait Anxiety on Threatening Speech Processing: Implications for Models of Emotional Language and Anxiety

    Speech can convey emotional meaning through different channels, two of which are regarded as particularly relevant in models of emotional language: prosody and semantics. These have been widely studied in terms of their production and processing aspects, but individual differences among listeners are sometimes overlooked. The present thesis examines whether greater intrinsic levels of anxiety can affect the processing of threatening speech. Trait anxiety is the predisposition to increased cognitions such as worry (over-thinking of the future) and emotions such as angst (a feeling of discomfort and tension), and can be reflected in an overactive behavioural inhibition system. As a result, according to models of emotional language and anxiety, emotional prosody/semantics and anxiety might share overlapping neural areas/routes and processing phases. Thus, threatening semantics or prosody could have differential effects on trait anxiety depending on the nature of this overlap. This problem is approached using behavioural and electroencephalographic (EEG) measures. Three dichotic listening experiments demonstrate that, at the behavioural level, trait anxiety does not modulate lateralisation when stimuli convey threatening prosody, threatening semantics, or both. However, these and another non-dichotic experiment indicate that greater anxiety induces substantially slower responses. An EEG experiment shows that this phenomenon has a very clear neural signature at late processing phases (~600 ms). Exploratory source localisation analyses indicate the involvement of areas predicted by the models, including portions of limbic, temporal and prefrontal cortex. The proposed explanation is that threatening speech can induce anxious people to over-engage with stimuli, and this disrupts late-phase processes associated with orientation/deliberation, as proposed by anxiety models. This process is independent of information type until a later phase occurring after speech comprehension (e.g. response preparation/execution). Given this, a new model of threatening language processing is proposed, which extends models of emotional language processing by incorporating an orientation/deliberation phase from anxiety models.
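
    The behavioural relationship reported above (higher trait anxiety, slower responses) can be illustrated with a simple linear regression on simulated data; the anxiety scale, sample size, and effect size below are invented for the example and do not come from the thesis.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n = 60
        trait_anxiety = rng.uniform(20, 80, n)                        # e.g. questionnaire scores
        mean_rt = 700 + 2.5 * trait_anxiety + rng.normal(0, 80, n)    # simulated slowing with anxiety

        slope, intercept, r, p, se = stats.linregress(trait_anxiety, mean_rt)
        print(f"slope = {slope:.2f} ms per anxiety point, r = {r:.2f}, p = {p:.4f}")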

    Intercepting the First Pass: Rapid Categorization is Suppressed for Unseen Stimuli

    The operations and processes that the human brain employs to achieve fast visual categorization remain a matter of debate. A first issue concerns the timing and place of rapid visual categorization and the extent to which it can be performed within an early feed-forward pass of information through the visual system. A second issue involves the categorization of stimuli that do not reach visual awareness. There is disagreement over the degree to which these stimuli activate the same early mechanisms as stimuli that are consciously perceived. We employed continuous flash suppression (CFS), EEG recordings, and machine learning techniques to study visual categorization of seen and unseen stimuli. Our classifiers were able to predict the category of stimuli from the EEG recordings on seen trials but not on unseen trials. Rapid categorization of conscious images could be detected around 100 ms at the occipital electrodes, consistent with a fast, feed-forward mechanism of target detection. For the invisible stimuli, however, CFS eliminated all traces of early processing. Our results support the idea of a fast mechanism of categorization and suggest that this early categorization process plays an important role in later, more subtle categorizations and perceptual processes.
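
    A common way to ask when category information becomes decodable, as in the analysis described above, is time-resolved decoding: a separate cross-validated linear classifier is trained on the channel pattern at each time point. The sketch below shows this on simulated epochs; the dimensions and data are placeholders, not the study's recordings or classifier.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        # Simulated epochs: (n_trials, n_channels, n_times), with binary category labels.
        X = rng.normal(size=(200, 64, 120))
        y = rng.integers(0, 2, 200)

        # Decode the stimulus category separately at each time point from the channel pattern.
        scores = []
        for t in range(X.shape[2]):
            clf = LogisticRegression(max_iter=1000)
            scores.append(cross_val_score(clf, X[:, :, t], y, cv=5).mean())

        scores = np.array(scores)
        print("peak decoding accuracy:", scores.max().round(2), "at time index", int(scores.argmax()))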