
    Emotion words and categories: evidence from lexical decision

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion–frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low-frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, did not differ from those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, that frequency plays a greater role in the recognition of negative words, and that a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.

    Phonological (un)certainty weights lexical activation

    Spoken word recognition involves at least two basic computations. First is matching acoustic input to phonological categories (e.g. /b/, /p/, /d/). Second is activating words consistent with those phonological categories. Here we test the hypothesis that the listener's probability distribution over lexical items is weighted by the outcome of both computations: uncertainty about phonological discretisation and the frequency of the selected word(s). To test this, we record neural responses in auditory cortex using magnetoencephalography, and model this activity as a function of the size and relative activation of lexical candidates. Our findings indicate that towards the beginning of a word, the processing system indeed weights lexical candidates by both phonological certainty and lexical frequency; however, later into the word, activation is weighted by frequency alone.
    Comment: 6 pages, 4 figures, accepted at: Cognitive Modeling and Computational Linguistics (CMCL) 201
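    The weighting hypothesis described above can be pictured as a simple computation: each lexical candidate is scored by the probability assigned to its (uncertain) phonological onset multiplied by its corpus frequency, and the scores are renormalised over the candidate set. The Python below is an illustrative sketch under those assumptions; the toy lexicon, phoneme probabilities, and frequency counts are invented, not the authors' materials or model.

        # Sketch of the weighting hypothesis (not the authors' code): score each
        # candidate word by the probability of its onset phoneme (phonological
        # certainty) times its corpus frequency, then renormalise.
        def lexical_activation(phoneme_probs, lexicon, word_freq):
            """Return a normalised probability distribution over lexical candidates."""
            scores = {}
            for phoneme, p_phoneme in phoneme_probs.items():
                for word in lexicon.get(phoneme, []):
                    # Early in the word: weight by BOTH phonological certainty
                    # and lexical frequency, as the abstract describes.
                    scores[word] = p_phoneme * word_freq.get(word, 1)
            total = sum(scores.values())
            return {word: score / total for word, score in scores.items()}

        # Hypothetical ambiguous /b/-/p/ onset.
        phoneme_probs = {"b": 0.7, "p": 0.3}
        lexicon = {"b": ["beach", "bark"], "p": ["peach", "park"]}
        word_freq = {"beach": 120, "bark": 40, "peach": 25, "park": 200}
        print(lexical_activation(phoneme_probs, lexicon, word_freq))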

    Constraint-Based Categorial Grammar

    We propose a generalization of Categorial Grammar in which lexical categories are defined by means of recursive constraints. In particular, the introduction of relational constraints allows one to capture the effects of (recursive) lexical rules in a computationally attractive manner. We illustrate the linguistic merits of the new approach by showing how it accounts for the syntax of Dutch cross-serial dependencies and the position and scope of adjuncts in such constructions. Delayed evaluation is used to process grammars containing recursive constraints.
    Comment: 8 pages, LaTeX
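    As background for the generalization described above, the sketch below implements only plain categorial-grammar function application (a category X/Y combining with Y to yield X, and Y with X\Y to yield X); it does not attempt the paper's relational constraints or delayed evaluation, and the category names are illustrative only.

        # Plain categorial-grammar application only; NOT the constraint-based
        # extension proposed in the paper.
        from dataclasses import dataclass
        from typing import Optional, Union

        @dataclass(frozen=True)
        class Basic:
            name: str                   # atomic category, e.g. "np" or "s"

        @dataclass(frozen=True)
        class Slash:
            result: "Cat"
            arg: "Cat"
            direction: str              # "/" seeks its argument rightward, "\" leftward

        Cat = Union[Basic, Slash]

        def combine(left: Cat, right: Cat) -> Optional[Cat]:
            """Forward and backward application."""
            if isinstance(left, Slash) and left.direction == "/" and left.arg == right:
                return left.result
            if isinstance(right, Slash) and right.direction == "\\" and right.arg == left:
                return right.result
            return None

        np_, s = Basic("np"), Basic("s")
        iv = Slash(s, np_, "\\")        # intransitive verb category s\np
        print(combine(np_, iv))         # -> Basic(name='s')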

    On the origin of the cumulative semantic inhibition effect

    We report an extension of the cumulative semantic inhibition effect found by Howard, Nickels, Coltheart, and Cole-Virtue (2006). Using more sensitive statistical analyses, we found a significant variation in the magnitude of the effect across categories. This variation cannot be explained by the naming speed of each category. In addition, using a sub-sample of the data, a second cumulative effect arose for newly defined supra-categories, over and above the effect of the original ones. We discuss these findings in terms of the representations that drive lexical access, and interpret them as supporting featural or distributed hypotheses.
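    The cumulative effect in question is conventionally quantified as a roughly linear increase in naming latency with the ordinal position of an item within its semantic category. The snippet below is an illustrative sketch of that measure only, with invented latencies; it is not the authors' (more sensitive) statistical analysis.

        # Illustrative measure of the cumulative effect: the least-squares slope
        # of naming latency on an item's ordinal position within its category.
        import numpy as np

        def cumulative_slope(ordinal_position, latency_ms):
            """Return (slope, intercept) of latency regressed on within-category position."""
            slope, intercept = np.polyfit(ordinal_position, latency_ms, 1)
            return slope, intercept

        positions = [1, 2, 3, 4, 5]            # 1st..5th named member of one category
        latencies = [620, 648, 655, 683, 702]  # invented naming latencies (ms)
        slope, _ = cumulative_slope(positions, latencies)
        print(f"about {slope:.1f} ms added per previously named category member")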

    Coping with speaker-related variation via abstract phonemic categories

    Listeners can cope with considerable variation in the way that different speakers talk. We argue here that they can do so because of a process of phonological abstraction in the speech-recognition system. We review evidence that listeners adjust the bounds of phonemic categories after only very limited exposure to a deviant realisation of a given phoneme. This learning can be talker-specific and is stable over time; further, the learning generalizes to previously unheard words containing the deviant phoneme. Together these results suggest that the learning involves adjustment of prelexical phonemic representations which mediate between the speech signal and the mental lexicon during word recognition. We argue that such an abstraction process is inconsistent with claims made by some recent models of language processing that the mental lexicon consists solely of multiple detailed traces of acoustic episodes. Simulations with a purely episodic model without functional prelexical abstraction confirm that such a model cannot account for the evidence on lexical generalization of perceptual learning. We conclude that abstract phonemic categories form a necessary part of lexical access, and that the ability to store talker-specific knowledge about those categories provides listeners with the means to deal with cross-talker variation.
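    One way to picture the category adjustment reviewed above is as a shift of a prelexical boundary along an acoustic continuum, driven by ambiguous tokens whose identity was fixed by lexical context. The sketch below is a toy illustration under those assumptions (a one-dimensional /f/-/s/ continuum, an invented learning rate); it is not a model proposed in the paper.

        # Toy picture of lexically guided retuning: a boundary on a 0-10 /f/-/s/
        # continuum shifts so ambiguous tokens fall in the lexically implied category.
        def classify(x, boundary):
            """Lower continuum values are heard as /f/, higher values as /s/."""
            return "f" if x < boundary else "s"

        def adjust_boundary(boundary, exposure, rate=0.3):
            """Shift the boundary after (token_value, lexically_implied_category) pairs."""
            for x, implied in exposure:
                if classify(x, boundary) != implied:
                    # Move the boundary just past the token, scaled by the rate.
                    target = x + 0.5 if implied == "f" else x - 0.5
                    boundary += rate * (target - boundary)
            return boundary

        # Ambiguous tokens (value 5.0) repeatedly occurring in /f/-biasing words.
        print(adjust_boundary(boundary=5.0, exposure=[(5.0, "f")] * 10))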

    The impact of dementia, age and sex on category fluency: Greater deficits in women with Alzheimer's disease

    Original article: http://www.sciencedirect.com/science/journal/00109452 (Copyright Elsevier Masson; DOI: 10.1016/j.cortex.2007.11.008).
    A category-specific effect in naming tasks has been reported in patients with Alzheimer's dementia. Nonetheless, naming tasks are frequently affected by methodological problems, e.g., ceiling effects for controls and "nuisance variables" that may confound results. Semantic fluency tasks can help to address some of these methodological difficulties, because they are not prone to producing ceiling effects and are less influenced by nuisance variables. One hundred and thirty-three participants (61 patients with probable AD and 72 controls: 36 young and 36 elderly) were evaluated with semantic fluency tasks in 14 semantic categories. Category fluency was affected both by dementia and by age: in nonliving-thing categories there were differences among the three groups, while in living-thing categories larger lexical categories produced bigger differences among groups. Sex differences in fluency emerged, but these were moderated both by age and by pathology. In particular, fluency was lower in female than in male Alzheimer patients for almost every subcategory.

    Automated Hate Speech Detection and the Problem of Offensive Language

    A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech, and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech, but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
    Comment: To appear in the Proceedings of ICWSM 2017. Please cite that version.
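    The modelling step described above can be illustrated with a minimal three-way text classifier. The sketch below uses TF-IDF features and logistic regression purely as an example pipeline; the placeholder tweets and labels are invented, and this is not the authors' exact feature set or model configuration.

        # Minimal three-way classifier sketch (hate / offensive / neither).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        tweets = [
            "example tweet containing a slur (placeholder text)",
            "you are so dumb",
            "what a lovely morning",
        ]
        labels = ["hate", "offensive", "neither"]   # crowd-sourced labels in the paper

        clf = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=1),
            LogisticRegression(max_iter=1000, class_weight="balanced"),
        )
        clf.fit(tweets, labels)
        print(clf.predict(["have a nice day"]))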