Effects of classification context on categorization in natural categories
The patterns of classification of borderline instances of eight common taxonomic categories were examined under three different instructional conditions to test two predictions: first, that lack of a specified context contributes to vagueness in categorization, and second, that altering the purpose of classification can lead to greater or lesser dependence on similarity in classification. The instructional conditions contrasted purely pragmatic with more technical/quasi-legal contexts as purposes for classification, and these were compared with a no-context control. The measures of category vagueness were between-subjects disagreement and within-subjects consistency, and the measures of similarity-based categorization were category breadth and the correlation of instance categorization probability with mean rated typicality, independently measured in a neutral context. Contrary to predictions, none of the measures of vagueness, reliability, category breadth, or correlation with typicality were generally affected by the instructional setting as a function of pragmatic versus technical purposes. Only one subcondition, in which a situational context was implied in addition to a purposive context, produced a significant change in categorization. Further experiments demonstrated that the effect of context was not increased when participants talked their way through the task, and that a technical context did not elicit more all-or-none categorization than did a pragmatic context. These findings place an important boundary condition on the effects of instructional context on conceptual categorization.
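The two dependent measures described above can be illustrated with a minimal sketch: between-subjects disagreement (maximal at a 50/50 categorization split, zero at unanimity) and the correlation of categorization probability with mean rated typicality. All data values and variable names here are illustrative assumptions, not the study's materials.

```python
# Toy per-item data: proportion of subjects categorizing each borderline
# instance as a category member, and its mean rated typicality (assumed
# 1-7 scale). Values are invented for illustration.
votes = [0.9, 0.5, 0.1, 0.7]
typicality = [6.5, 4.0, 1.5, 5.0]

# Disagreement peaks at p = 0.5 (a 50/50 split) and vanishes at 0 or 1.
disagreement = [1 - abs(2 * p - 1) for p in votes]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong positive correlation would indicate similarity-based categorization.
r = pearson(votes, typicality)
```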
Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm
NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis, we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yields a performance improvement over previous distant supervision approaches.

Comment: Accepted at EMNLP 2017. Please include EMNLP in any citations. Minor changes from the EMNLP camera-ready version. 9 pages + references and supplementary material.
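The distant-supervision idea in the abstract above can be sketched in a few lines: tweets that contain a known emoji receive that emoji as a noisy label, with the emoji itself stripped from the text so the label is not trivially visible. The emoji-to-label mapping and tweets below are toy assumptions, not the paper's 64-emoji set or data.

```python
# Hypothetical mapping from emoji to a noisy emotional label.
EMOJI_LABELS = {"😂": "joy", "😢": "sadness", "😠": "anger"}

def distant_label(tweet: str):
    """Return (text with the emoji removed, noisy label), or None if no known emoji occurs."""
    for emoji, label in EMOJI_LABELS.items():
        if emoji in tweet:
            # Strip the emoji so a model cannot read the label off the input.
            return tweet.replace(emoji, "").strip(), label
    return None

tweets = ["great day 😂", "missed the bus 😢", "no emoji here"]
labeled = [pair for t in tweets if (pair := distant_label(t)) is not None]
```

Tweets without a mapped emoji are simply dropped, which is what makes the resulting labels abundant but noisy.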
Cultural variation in cognitive flexibility reveals diversity in the development of executive functions
Cognitive flexibility, the adaptation of representations and responses to new task demands, improves dramatically in early childhood. It is unclear, however, whether flexibility is a coherent, unitary cognitive trait, or is an emergent dimension of task-specific performance that varies across populations with divergent experiences. Three- to 5-year-old English-speaking U.S. children and Tswana-speaking South African children completed two distinct language-processing cognitive flexibility tests: the FIM-Animates, a word-learning test, and the 3DCCS, a rule-switching test. U.S. and South African children did not differ in word-learning flexibility but showed similar age-related increases. In contrast, U.S. preschoolers showed an age-related increase in rule-switching flexibility but South African children did not. Verbal recall explained additional variance in both tests but did not modulate the interaction between population sample (i.e., country) and task. We hypothesize that rule-switching flexibility might be more dependent upon particular kinds of cultural experiences, whereas word-learning flexibility is less cross-culturally variable.
The role of phonology in visual word recognition: evidence from Chinese
Posters - Letter/Word Processing V: abstract no. 5024

The hypothesis of bidirectional coupling of orthography and phonology predicts that phonology plays a role in visual word recognition, as observed in the effects of feedforward and feedback spelling-to-sound consistency on lexical decision. However, because orthography and phonology are closely related in alphabetic languages (homophones in alphabetic languages are usually orthographically similar), it is difficult to exclude an influence of orthography on phonological effects in visual word recognition. Chinese languages contain many written homophones that are orthographically dissimilar, allowing a test of the claim that phonological effects can be independent of orthographic similarity. We report a study of visual word recognition in Chinese based on a mega-analysis of lexical decision performance with 500 characters. The results from multiple regression analyses, after controlling for orthographic frequency, stroke number, and radical frequency, showed main effects of feedforward and feedback consistency, as well as interactions between these variables and phonological frequency and number of homophones. Implications of these results for resonance models of visual word recognition are discussed.
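The regression logic described above, testing a predictor of interest after entering control variables, can be sketched with ordinary least squares on synthetic data. The variable names, coefficients, and data are assumptions for illustration only, not the study's dataset.

```python
# Sketch of a multiple regression: predict lexical-decision RT from a
# consistency measure while controls (frequency, stroke number) are in
# the model. Synthetic data with known slopes, so the fit is checkable.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # matching the 500-character mega-analysis scale
freq = rng.normal(size=n)          # control: orthographic frequency
strokes = rng.normal(size=n)       # control: stroke number
consistency = rng.normal(size=n)   # predictor of interest
rt = 600 - 20 * freq + 5 * strokes - 10 * consistency + rng.normal(scale=5, size=n)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), freq, strokes, consistency])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
# beta[3] estimates the consistency effect with the controls partialled in.
```

A significant, correctly signed slope on the predictor after the controls are entered is what the abstract's "main effects of feedforward and feedback consistency" refers to.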
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.