
    Evidence for Letter-Specific Position Coding Mechanisms

    The perceptual matching (same-different judgment) paradigm was used to investigate the precision of position coding for strings of letters, digits, and symbols. Reference and target stimuli were six characters long and could be identical or differ either by a transposition of two characters or by a substitution of two characters. The distance separating the two critical characters was manipulated so that they were contiguous, separated by one intervening character, or separated by two intervening characters. Effects of character type and distance were measured as the difference between the transposition and substitution conditions (the transposition cost). Error rates revealed that transposition costs were greater for letters than for digits, which in turn were greater than for symbols. Furthermore, letter stimuli showed a gradual decrease in transposition cost as the distance between the letters increased, whereas the only significant difference for digit and symbol stimuli arose between contiguous and non-contiguous changes, with no effect of distance on the non-contiguous changes. The results are taken as further evidence for letter-specific position coding mechanisms.
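
    To make the manipulation concrete, the following minimal Python sketch (hypothetical character pool and positions; not the authors' materials) shows how "different" targets could be built by transposing or substituting two characters separated by zero, one, or two intervening characters, and how the transposition cost is computed as the error-rate difference between the two conditions.

        import random

        ALPHABET = "bcdfghjklmnpqrstvwxz"  # hypothetical replacement pool

        def make_target(reference, kind, gap, start=1):
            """Build a target differing from `reference` at positions start and start + gap + 1."""
            i, j = start, start + gap + 1  # gap = number of intervening characters (0, 1, or 2)
            chars = list(reference)
            if kind == "transposition":
                chars[i], chars[j] = chars[j], chars[i]
            elif kind == "substitution":
                for pos in (i, j):
                    chars[pos] = random.choice([c for c in ALPHABET if c not in reference])
            return "".join(chars)

        def transposition_cost(error_rate_transposition, error_rate_substitution):
            """Transposition cost = difference in error rate between the two 'different' conditions."""
            return error_rate_transposition - error_rate_substitution

        print(make_target("lfrptd", "transposition", gap=1))  # 'lprftd': characters 2 and 4 swapped
        print(transposition_cost(0.28, 0.10))                 # ~0.18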

    The consequences of literacy and schooling for parsing strings

    Word processing initially occurs through letter-by-letter parsing at early stages of reading development. Here we investigated the role of literacy in parsing both linguistic and non-linguistic strings, as well as the consequences of the possible changes brought about by the acquisition of reading. Illiterates matched with schooled literates on socio-demographic and cognitive measures were presented with a character search task. Overall, literates performed better than illiterates at identifying constituents in linguistic and non-linguistic strings. Illiterates showed a similar performance for all types of strings. In contrast, literates showed a graded pattern, with non-linguistic strings processed much worse than linguistic strings. These results support domain-specific models of orthographic processing, and they suggest that visual word recognition is not fully parallel to visual object recognition. Most importantly, they demonstrate the impact of literacy on the ability to break a word down into its constituents. This research was partially funded by grants PSI2015-65689-P and SEV-2015-0490 from the Spanish Government, AThEME-613465 from the European Union, ERC-AdG-295362 from the European Research Council, and PI2015-1-27 from the Eusko Jaurlaritza. This project was also supported by a 2016 BBVA Foundation Grant for Researchers and Cultural Creators awarded to the last author.

    Neural correlates of confusability in recognition of morphologically complex Korean words

    The confusion that arises when people must reject a non-word created by switching two adjacent letters of an actual word is called the transposition confusability effect (TCE). The TCE is known to occur at very early stages of visual word recognition with the exchange of units such as letters or syllables, but little is known about its brain mechanisms. In this study, we examined the neural correlates of the TCE and the effect of morpheme boundary placement on the TCE. We manipulated the placement of a morpheme boundary by exchanging the positions of two syllables embedded in Korean morphologically complex words made up of a lexical morpheme and a grammatical morpheme. In the two experimental conditions, the transposition-syllable within-boundary condition (TSW) involved exchanging two syllables within the same morpheme, whereas the across-boundary condition (TSA) involved exchanging syllables across the boundary between the stem and the grammatical morpheme. During fMRI, participants performed a lexical decision task. Behavioral results revealed a TCE in the TSW condition, and the morpheme boundary, manipulated in the TSA condition, modulated the TCE. In the fMRI results, the TCE induced activation in the left inferior parietal lobe (IPL) and intraparietal sulcus (IPS). The IPS activation was specific to the TCE, and its strength was associated with task performance. Furthermore, two functional networks were involved in the TCE: the central executive network and the dorsal attention network. Morpheme boundary modulation suppressed the TCE by recruiting prefrontal and temporal regions, key regions involved in semantic processing. Our findings point to a role for the dorsal visual pathway in syllable position processing and suggest that its interaction with other higher cognitive systems is modulated by the morphological boundary in the early phases of visual word recognition.
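
    As a schematic illustration of the two transposition conditions (using romanised placeholder syllables, not the authors' Korean stimuli), the sketch below swaps two adjacent syllables either within the stem morpheme (TSW) or across the boundary between the stem and the grammatical morpheme (TSA).

        def transpose(syllables, i):
            """Swap the syllables at positions i and i + 1."""
            s = list(syllables)
            s[i], s[i + 1] = s[i + 1], s[i]
            return s

        # Hypothetical word: a two-syllable stem followed by a one-syllable grammatical affix.
        word = ["sa", "rang", "eul"]   # placeholder romanisation; boundary falls after the stem
        boundary = 2                   # index of the first affix syllable

        tsw = transpose(word, 0)             # swap within the stem            -> within-boundary
        tsa = transpose(word, boundary - 1)  # swap across the stem/affix edge -> across-boundary
        print(tsw, tsa)                      # ['rang', 'sa', 'eul'] ['sa', 'eul', 'rang']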

    Does orthographic processing emerge rapidly after learning a new script?

    Orthographic processing is characterized by location-invariant and location-specific processing (Grainger, 2018): (1) strings of letters are more vulnerable to transposition effects than strings of symbols in same-different tasks (location-invariant processing); and (2) strings of letters, but not strings of symbols, show an initial-position advantage in target-in-string identification tasks (location-specific processing). To examine the emergence of these two markers of orthographic processing, we conducted a same-different task and a target-in-string identification task with two unfamiliar scripts (pre-training experiments). Across six training sessions, participants learned to fluently read and write one of these scripts. The post-training experiments were parallel to the pre-training experiments. Results showed that the magnitude of the transposed-letter effect in the same-different task and the serial function in the target-in-string identification task were remarkably similar for the trained and untrained scripts. Thus, location-invariant and location-specific processing do not emerge rapidly after learning a new script; instead, they may require thorough experience with specific orthographic structures. This study was supported by the Spanish Ministry of Science, Innovation, and Universities (PRE2018-083922, PSI2017-86210-P) and by the Department of Innovation, Universities, Science and Digital Society of the Valencian Government (GV/2020/074).

    The Impact of Text Orientation on Form Effects with Chinese, Japanese and English readers

    Does visuospatial orientation influence form priming effects in parallel ways in Chinese and English? Given the differences in how orthographic symbols are presented in Chinese versus English, one might expect to find some differences in early word recognition processes and, hence, in the nature of form priming effects. According to perceptual learning accounts, form priming effects should be influenced by text orientation (Dehaene, Cohen, Sigman, & Vinckier, 2005; Grainger & Holcomb, 2009). In contrast, Witzel, Qiao, and Forster's (2011) abstract letter unit account proposes that the mechanism responsible for such effects acts at a totally abstract orthographic level (i.e., the visuospatial orientation is irrelevant to the nature of the relevant orthographic code). One goal of the present research was to determine whether either of these accounts could explain form priming effects in both languages. Chapter 2 (Yang, Chen, Spinelli & Lupker, 2019) expanded the debate between these positions beyond alphabetic scripts and the syllabic Kana script used by Witzel et al. (2011) to a logographic script (Chinese). I report four experiments with Chinese participants in this chapter. The experiments showed masked form priming effects with targets in four different orientations (left-to-right, top-to-bottom, right-to-left, and bottom-to-top), supporting Witzel et al.'s account. Chapter 3 (Yang, Hino, Chen, Yoshihara, Nakayama, Xue, & Lupker, in press) evaluated whether the backward priming effect obtained in Experiment 2.3 (i.e., backward primes and forward targets) is truly an orthographic effect or whether it may be either morphologically/meaning-based or syllabically/phonologically based. Five experiments, two involving phonologically related primes and three involving meaning-related primes, produced no evidence that either of those factors contributed to the backward priming effect, implying that it truly is an orthographic effect. In Chapter 4 (Yang & Lupker, 2019), I examined whether text rotation to different degrees (e.g., 0°, 90°, and 180° rotations) modulated transposed-letter (TL) priming effects in two experiments with English participants. The priming effects were similar in size for horizontal (0°), 90°-rotated, and 180°-rotated words, providing further support for abstract letter unit accounts of orthographic coding. These results support abstract letter/character unit accounts of form priming effects while failing to support perceptual learning accounts. Further, these results indicate a language difference: Chinese readers have more flexible (i.e., less precise) letter position coding than English readers, a fact that poses an interesting new challenge to existing orthographic coding theories.
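
    A brief, hypothetical Python sketch of transposed-letter (TL) prime construction is given below; it is not the materials of these experiments, and display rotation (0°, 90°, 180°) would be a presentation-level parameter that leaves the prime string itself unchanged.

        def tl_prime(word, i):
            """Transposed-letter prime: swap the letters at positions i and i + 1."""
            w = list(word)
            w[i], w[i + 1] = w[i + 1], w[i]
            return "".join(w)

        def substitution_prime(word, i, replacement):
            """Substitution control: replace the letters at i and i + 1 with `replacement` (two letters)."""
            return word[:i] + replacement + word[i + 2:]

        print(tl_prime("judge", 2))                  # 'jugde'
        print(substitution_prime("judge", 2, "pt"))  # 'jupte'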

    Encoding order and developmental dyslexia: a family of skills predicting different orthographic components

    We investigated order encoding in developmental dyslexia using a task that presented non-alphanumeric visual characters either simultaneously or sequentially (to tap spatial and temporal order encoding, respectively) and asked participants to reproduce their order. Dyslexic participants performed poorly in the sequential condition but normally in the simultaneous condition, except for the positions most susceptible to interference. These results are novel in demonstrating a selective difficulty with temporal order encoding in a dyslexic group. We also tested the associations between our order reconstruction tasks and (a) lexical learning and phonological tasks, and (b) different reading and spelling tasks. Correlations were extensive when the whole group of participants was considered together. When dyslexics and controls were considered separately, different patterns of association emerged between orthographic tasks on the one hand and tasks tapping order encoding, phonological processing, and written learning on the other. These results indicate that different skills support different aspects of orthographic processing and are impaired to different degrees in individuals with dyslexia. Therefore, developmental dyslexia is not caused by a single impairment but by a family of deficits loosely related to difficulties with order. Understanding the contribution of these different deficits will be crucial for deepening our understanding of this disorder.

    Letter-similarity effects in braille word recognition.

    Letter-similarity effects are elusive with common words in lexical decision experiments: viotin and viocin (base word: violin) produce similar error rates and rejection latencies. However, they are robust for stimuli often presented with the same appearance (e.g., misspelled logotypes such as anazon [base word: amazon] produce more errors and longer latencies than atazon). Here, we examine whether letter-similarity effects occur in reading braille. The rationale is that braille is a writing system in which the sensory information is processed in qualitatively different ways than in visual reading: the form of a word's letters is highly stable due to the standardization of braille, and the sensing of characters is transient and somewhat serial. Hence, we hypothesized that the letter-similarity effect would be sizeable with misspelled common words in braille, unlike in the visual modality. To test this hypothesis, we conducted a lexical decision experiment with blind adult braille readers. Pseudowords were created by replacing one letter of a word with a tactually similar or dissimilar letter in braille, following a tactile similarity matrix (Baciero et al., 2021a; e.g., [ausor] vs. [aucor]; base word: [autor]). Bayesian linear mixed-effects models showed that responses to tactually similar pseudowords were less accurate than to tactually dissimilar pseudowords; the response times (RTs) showed a parallel trend. This finding supports the idea that, when reading braille, the mapping of input information onto abstract letter representations is done through a noisy channel (Norris & Kinoshita, 2012).
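
    The following sketch (with made-up similarity values, not the matrix of Baciero et al., 2021a) illustrates how a tactually similar and a tactually dissimilar pseudoword could be derived from the same base word by consulting a letter-by-letter similarity matrix.

        # Toy tactile-similarity values for the letter 't'; real values would come from an empirical matrix.
        similarity = {
            "t": {"s": 0.85, "m": 0.40, "c": 0.15},
        }

        def pseudoword_pair(word, pos):
            """Return (similar, dissimilar) pseudowords differing from `word` only at `pos`."""
            original = word[pos]
            ranked = sorted(similarity[original], key=similarity[original].get, reverse=True)

            def swap(letter):
                return word[:pos] + letter + word[pos + 1:]

            return swap(ranked[0]), swap(ranked[-1])

        print(pseudoword_pair("autor", 2))  # ('ausor', 'aucor') with these toy values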

    Professional or amateur? The phonological output buffer as a working memory operator

    The Phonological Output Buffer (POB) is thought to be the stage in language production where phonemes are held in working memory and assembled into words. The neural implementation of the POB remains unclear despite a wealth of phenomenological data. Individuals with POB impairment make phonological errors when they produce words and non-words, including phoneme omissions, insertions, transpositions, substitutions, and perseverations. Errors can apply to different kinds and sizes of units, such as phonemes, number words, morphological affixes, and function words, and evidence from POB impairments suggests that units tend to be substituted with units of the same kind (e.g., numbers with numbers and whole morphological affixes with other affixes). This suggests that different units are processed and stored in the POB at the same stage, but perhaps separately in different mini-stores. Further, similar impairments can affect the buffer used to produce sign language, which raises the question of whether it is instantiated in a distinct device with the same design. However, what appear to be separate buffers may be distinct regions in the activity space of a single extended POB network, connected with a lexicon network. The self-consistency of this idea can be assessed by studying an autoassociative Potts network, as a model of memory storage distributed over several cortical areas, and testing whether the network can represent both units of words and signs, reflecting the types and patterns of errors made by individuals with POB impairment.
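
    As a rough illustration of the modelling approach named in the final sentence, here is a toy autoassociative Potts-style network in Python/NumPy; the sizes, random patterns, and Hebbian rule are simplifying assumptions, and full Potts models of this kind include further machinery (e.g., a quiescent state and graded activation) omitted here.

        import numpy as np

        rng = np.random.default_rng(0)
        N, S, P = 30, 4, 3                        # units, states per unit, stored patterns
        patterns = rng.integers(0, S, size=(P, N))

        # Hebbian tensor couplings: J[i, j, k, l] links state k of unit i with state l of unit j.
        J = np.zeros((N, N, S, S))
        for mu in range(P):
            onehot = np.eye(S)[patterns[mu]]      # (N, S) one-hot coding of the pattern
            J += np.einsum("ik,jl->ijkl", onehot, onehot)
        for i in range(N):
            J[i, i] = 0.0                         # no self-coupling

        def recall(state, steps=5):
            """Retrieval dynamics: each unit adopts the state receiving the largest local field."""
            state = state.copy()
            for _ in range(steps):
                for i in range(N):
                    field = [sum(J[i, j, k, state[j]] for j in range(N)) for k in range(S)]
                    state[i] = int(np.argmax(field))
            return state

        cue = patterns[0].copy()
        cue[:6] = rng.integers(0, S, size=6)      # corrupt part of a stored pattern
        print(np.mean(recall(cue) == patterns[0]))  # fraction of units correctly restored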