
    Constraints on letter-in-string identification in peripheral vision: effects of number of flankers and deployment of attention

    Effects of non-adjacent flanking elements on crowding of letter stimuli were examined in experiments manipulating the number of flanking elements and the deployment of spatial attention. To this end, identification accuracy for single letters was compared with identification of letter targets surrounded by two, four, or six flanking elements placed symmetrically to the left and right of the target. Target stimuli were presented left or right of a central fixation, and appeared either unilaterally or with an equivalent number of characters in the contralateral visual field (bilateral presentation). Experiment 1A tested letter targets with random letter flankers, and Experiments 1B and 2 tested letter targets with Xs as flanking stimuli. The results revealed a number-of-flankers effect that extended beyond standard two-flanker crowding. Flanker interference was stronger with random letter flankers than with homogeneous Xs, and performance was systematically better under unilateral than under bilateral presentation. Furthermore, the difference between the zero-flanker and two-flanker conditions was significantly greater under bilateral presentation, whereas the difference between the two-flanker and four-flanker conditions did not differ across unilateral and bilateral presentation. The complete pattern of results can be captured by the independent contributions of excessive feature integration and the deployment of spatial attention to letter-in-string visibility.

    A Dual-Route Approach to Orthographic Processing

    In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).
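    The contrast between the two hypothesized codes can be made concrete with a small sketch. This is an illustrative reading of the abstract, not the authors' implementation; the gap parameter and boundary marker are assumptions for the example.

```python
def coarse_grained_code(word, max_gap=2):
    """Ordered letter pairs (open bigrams) separated by up to max_gap
    intervening letters: approximate position, independent of contiguity."""
    pairs = set()
    for i in range(len(word)):
        for j in range(i + 1, min(i + 2 + max_gap, len(word))):
            pairs.add(word[i] + word[j])
    return pairs

def fine_grained_code(word):
    """Contiguous letter pairs with word-boundary markers: precise letter
    order plus sensitivity to word beginnings and endings."""
    marked = "#" + word + "#"  # '#' marks the word edges
    return [marked[i:i + 2] for i in range(len(marked) - 1)]

# Transposed-letter neighbors share most coarse-grained pairs...
print(coarse_grained_code("salt") & coarse_grained_code("slat"))
# ...but almost none of their fine-grained, adjacency-preserving pairs.
print(set(fine_grained_code("salt")) & set(fine_grained_code("slat")))
```

    On this sketch, "salt" and "slat" share five of their six coarse-grained pairs but only the two edge pairs of the fine-grained code, which is one way a dual-route account can separate fast semantic access from precise sublexical decoding.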

    Parallel graded attention in reading: A pupillometric study

    There are roughly two lines of theory to account for recent evidence that word processing is influenced by adjacent orthographic information. One line assumes that multiple words can be processed simultaneously through a parallel graded distribution of visuo-spatial attention. The other line assumes that attention is strictly directed to single words, but that letter detectors are connected to both foveal and parafoveal feature detectors, thereby driving parafoveal-foveal integrative effects. Putting these two accounts to the test, we built on recent research showing that the pupil responds to the brightness of covertly attended (i.e., without looking) locations in the visual field. Experiment 1 showed that foveal target word processing was facilitated by related parafoveal flanking words when these were positioned to the left and right of the target, but not when they were positioned above and below it. Perfectly in line with this asymmetry, in Experiment 2 we found that pupil size was contingent on the brightness of the locations of horizontally, but not vertically, aligned flankers, indicating that attentional resources were allocated to those words involved in the parafoveal-on-foveal effect. We conclude that orthographic parafoveal-on-foveal effects are driven by parallel graded attention.

    What absent switch costs and mixing costs during bilingual language comprehension can tell us about language control.

    Epub 2019 Mar 28. In the current study, we set out to investigate language control, the process that minimizes cross-language interference, during bilingual language comprehension. According to current theories of bilingual language comprehension, language-switch costs, which are a marker of reactive language control, should be observed. However, a closer look at the literature shows that this is not always the case. Furthermore, little to no evidence for language-mixing costs, which are a marker of proactive language control, has been observed in the bilingual language comprehension literature. This is in line with current theories of bilingual language comprehension, as they do not explicitly account for proactive language control. In the current study, we further investigated these two markers of language control and found no evidence for comprehension-based language-switch costs in six experiments, even though other types of switch costs were observed with the exact same setup (i.e., task-switch costs, stimulus modality-switch costs, and production-based language-switch costs). Furthermore, only one out of three experiments showed comprehension-based language-mixing costs, providing the first tentative evidence for proactive language control during bilingual language comprehension. The implications of the absence and occurrence of these costs are discussed in terms of processing speed and parallel language activation. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 706128. This research was also supported by grants ANR-11-LABX-0036 (BLRI), ANR-16-CONV-0002 (ILCB), and ANR-11-IDEX-0001-02 from the French National Research Agency (ANR).

    Spoken word recognition without a TRACE

    How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time-specific (temporal-position-specific) units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition, including visual word recognition, have grappled with the problem of spatial invariance and arrived at solutions other than a fully reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time-invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power.
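    The string-kernel idea can be sketched as follows: a phoneme sequence is recoded as a time-invariant set of ordered phoneme-pair ("diphone") activations, with non-adjacent pairs weighted down by a decay factor. The function name and decay value are illustrative assumptions, not parameters taken from the model.

```python
def diphone_kernel(phonemes, decay=0.5):
    """Map a phoneme sequence to {ordered pair: activation}, weighting
    each pair by decay ** (number of intervening phonemes)."""
    activations = {}
    for i in range(len(phonemes)):
        for j in range(i + 1, len(phonemes)):
            pair = (phonemes[i], phonemes[j])
            weight = decay ** (j - i - 1)  # 1.0 for adjacent pairs
            activations[pair] = activations.get(pair, 0.0) + weight
    return activations

# 'cat' /k ae t/ yields one unit per ordered pair, regardless of when in
# time the word occurs, hence no per-time-step reduplication as in TRACE.
print(diphone_kernel(["k", "ae", "t"]))
```

    Because the resulting units are time-invariant, the inventory grows with the number of distinct diphones rather than with the length of the memory trace, which is the source of the computational savings the abstract describes.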

    Letter perception: from item-level ERPs to computational models

    ISBN: 978-2-9532965-0-1. In the present study, online measures of letter identification were used to test computational models of letter perception. Event-related potentials (ERPs) were recorded to letters and pseudo-letters, revealing a transition from feature analysis to letter identification in the 100-200 ms time window. Measures indexing this transition were then computed at the level of individual letters. Simulations with several versions of an interactive-activation model of letter perception were fitted to these item-level ERP measures. The results favor a model of letter perception with feedforward excitatory connections from the feature level to the letter level, lateral inhibition at the letter level, and excitatory feedback from the letter level to the feature level.
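    The architecture favored by the ERP fits can be illustrated with a toy two-layer network. The feature inventory, the two-letter alphabet, and all weights below are invented for the example; only the connectivity pattern (feedforward excitation, lateral inhibition among letters, letter-to-feature feedback) follows the description above.

```python
# Toy feature sets for two letters (illustrative, not the model's features).
FEATURES = {"A": {"diag", "bar"}, "H": {"vert", "bar"}}

def step(feat_act, letter_act, ff=0.4, fb=0.2, inhib=0.3):
    """One update cycle: feature-to-letter excitation (ff), lateral
    inhibition among letters (inhib), letter-to-feature feedback (fb)."""
    new_letters = {}
    for letter, feats in FEATURES.items():
        bottom_up = ff * sum(feat_act.get(f, 0.0) for f in feats)
        rivals = sum(a for other, a in letter_act.items() if other != letter)
        new_letters[letter] = max(0.0, letter_act[letter] + bottom_up
                                  - inhib * rivals)
    new_feats = dict(feat_act)
    for letter, feats in FEATURES.items():
        for f in feats:  # excitatory feedback from letters to features
            new_feats[f] = new_feats.get(f, 0.0) + fb * letter_act[letter]
    return new_feats, new_letters

feat_act = {"diag": 1.0, "bar": 1.0, "vert": 0.0}  # input consistent with 'A'
letter_act = {"A": 0.0, "H": 0.0}
for _ in range(5):
    feat_act, letter_act = step(feat_act, letter_act)
print(letter_act)  # 'A' comes to dominate 'H'
```

    With input features matching 'A', lateral inhibition and feedback let the consistent letter pull ahead of its rival, the qualitative behavior the item-level ERP measures were used to constrain.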

    Deciphering CAPTCHAs: What a Turing Test Reveals about Human Cognition

    Turning Turing's logic on its head, we used widespread letter-based Turing Tests found on the internet (CAPTCHAs) to shed light on human cognition. We examined the basis of the human ability to solve CAPTCHAs, where machines fail. We asked whether this is due to our use of slow-acting inferential processes that would not be available to machines, or whether fast-acting automatic orthographic processing in humans has superior robustness to shape variations. A masked priming lexical decision experiment revealed efficient processing of CAPTCHA words in conditions that rule out the use of slow inferential processing. This shows that the human superiority in solving CAPTCHAs builds on a high degree of invariance to location and continuous transforms, which is achieved during the very early stages of visual word recognition in skilled readers.

    OB1-reader: A model of word recognition and eye movements in text reading

    Decades of reading research have led to sophisticated accounts of single-word recognition and, in parallel, accounts of eye-movement control in text reading. Although these two endeavors have strongly advanced the field, their relative independence has precluded an integrated account of the reading process. To bridge the gap, we here present a computational model of reading, OB1-reader, which integrates insights from both literatures. Key features of OB1 are as follows: (1) parallel processing of multiple words, modulated by an attentional window of adaptable size; (2) coding of input through a layer of open bigram nodes that represent pairs of letters and their relative position; (3) activation of word representations based on constituent bigram activity, competition with other word representations, and contextual predictability; (4) mapping of activated words onto a spatiotopic sentence-level representation to keep track of word order; and (5) saccade planning, with the saccade goal being dependent on the length and activation of surrounding word units, and the saccade onset being influenced by word recognition. A comparison of simulation results with experimental data shows that the model provides a fruitful and parsimonious theoretical framework for understanding reading behavior.
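    Point (5) lends itself to a short sketch: among upcoming words, the saccade goal favors words that are long (more visual information needed) and still weakly activated (not yet recognized). The scoring rule and values below are assumptions made for illustration, not OB1's actual equations or parameters.

```python
def saccade_goal(words, activations, position):
    """Return the index of the upcoming word whose recognition most
    needs fixation: long words with low activation score highest.
    The length * (1 - activation) rule is an illustrative stand-in."""
    best, best_score = None, float("-inf")
    for i in range(position + 1, len(words)):
        score = len(words[i]) * (1.0 - activations[i])
        if score > best_score:
            best, best_score = i, score
    return best

# A short, already-active function word is skipped in favor of the long,
# weakly activated content word.
words = ["the", "identification", "of", "words"]
acts = [0.9, 0.1, 0.8, 0.3]
print(saccade_goal(words, acts, 0))  # → 1 ('identification')
```

    This captures the qualitative claim that word length and activation jointly determine the saccade goal, so that short or already-recognized words tend to be skipped.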

    Evidence for Letter-Specific Position Coding Mechanisms

    The perceptual matching (same-different judgment) paradigm was used to investigate the precision of position coding for strings of letters, digits, and symbols. Reference and target stimuli were six characters long and could be identical or differ either by transposing two characters or substituting two characters. The distance separating the two characters was manipulated such that they could be contiguous, separated by one intervening character, or separated by two intervening characters. Effects of type of character and distance were measured in terms of the difference between the transposition and substitution conditions (transposition cost). Error rates revealed that transposition costs were greater for letters than for digits, which in turn were greater than for symbols. Furthermore, letter stimuli showed a gradual decrease in transposition cost as the distance between the characters increased, whereas for digit and symbol stimuli the only significant difference arose between contiguous and non-contiguous changes, with no effect of distance among the non-contiguous changes. The results are taken as further evidence for letter-specific position coding mechanisms.