
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them (by ±1 degree) along imaginary spokes emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests that Gestalt grouping is not used as a strategy in these tasks, and it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
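
    A minimal sketch (not the authors' code) of the radial "spoke" displacement described above: each of eight items is shifted 1 degree of visual angle toward or away from the central fixation point along the line joining it to fixation. The eccentricity, ring layout, and variable names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    n_items = 8
    eccentricity_deg = 4.0  # assumed baseline distance of each rectangle from fixation
    angles = np.linspace(0, 2 * np.pi, n_items, endpoint=False)

    # Original positions in degrees of visual angle, with fixation at the origin.
    x = eccentricity_deg * np.cos(angles)
    y = eccentricity_deg * np.sin(angles)

    # Shift each item +/-1 degree along its own spoke through fixation.
    shift = rng.choice([-1.0, 1.0], size=n_items)
    new_ecc = eccentricity_deg + shift
    x_new = new_ecc * np.cos(angles)
    y_new = new_ecc * np.sin(angles)

    for i in range(n_items):
        print(f"item {i}: ({x[i]:+.2f}, {y[i]:+.2f}) -> ({x_new[i]:+.2f}, {y_new[i]:+.2f})")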

    Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward direction from processing in the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress the sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information, with a peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing and revealed the neural signature of increased recurrent processing demands under challenging viewing conditions.
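
    As an illustration of the time-resolved multivariate pattern classification used above, the sketch below trains a classifier at each time sample of simulated "MEG" data and reports the latency of peak decoding accuracy. The data, dimensions, and choice of classifier are assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    n_trials, n_channels, n_times = 200, 64, 120   # assumed dimensions
    X = rng.standard_normal((n_trials, n_channels, n_times))
    y = rng.integers(0, 2, n_trials)               # two stimulus categories

    # Inject a weak category signal in a late window to mimic recurrent processing.
    X[y == 1, :, 70:90] += 0.3

    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = LogisticRegression(max_iter=1000)
        accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

    peak_latency = int(accuracy.argmax())
    print(f"peak decoding accuracy {accuracy.max():.2f} at time sample {peak_latency}")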

    Sensitivity to Timing and Order in Human Visual Cortex

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to how the brain encodes visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences between parts as small as 17 ms. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. This sensitivity to the order of stimulus presentation provides evidence that the brain may use differences in relative timing as a means of representing information.
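
    To make the part-asynchrony manipulation concrete, here is a small scheduling sketch that rounds a requested asynchrony (e.g., 17 ms) to whole display frames and returns onset times for two object parts in a chosen order. The 60 Hz refresh rate, part labels, and function name are illustrative assumptions.

    REFRESH_MS = 1000 / 60.0  # assumed 60 Hz display, roughly 16.7 ms per frame

    def part_schedule(asynchrony_ms: float, order: str = "top_first") -> dict:
        """Return onset times (ms) for two object parts presented asynchronously."""
        first, second = ("top", "bottom") if order == "top_first" else ("bottom", "top")
        frames = round(asynchrony_ms / REFRESH_MS)  # nearest whole frame
        return {first: 0.0, second: frames * REFRESH_MS}

    print(part_schedule(17, "top_first"))     # second part lags by one frame (~17 ms)
    print(part_schedule(17, "bottom_first"))  # same lag, reversed order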

    Transsaccadic Representation of Layout: What Is the Time Course of Boundary Extension?

    How rapidly does boundary extension occur? Across experiments, trials included a 3-scene sequence (325 ms/picture), a masked interval, and the repetition of 1 scene. The repetition was either the same view or a different one (more close-up or wider angle). Observers rated the repetition as the same as, closer than, or more wide-angle than the original view on a 5-point scale. Masked intervals were 100, 250, 625, or 1,000 ms in Experiment 1 and 42, 100, or 250 ms in Experiments 2 and 3. Boundary extension occurred in all cases: Identical views were rated as too “close-up,” and distractor views elicited the rating asymmetry typical of boundary extension (wider-angle distractors were rated as more similar to the original than were closer-up distractors). Most important, boundary extension was evident when only a 42-ms mask separated the original and test views. Experiments 1 and 3 included conditions eliciting a gaze shift prior to the rating test; this did not eliminate boundary extension. Results show that boundary extension is available soon enough and is robust enough to play an on-line role in view integration, perhaps supporting the incorporation of views within a larger spatial framework.
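
    The rating asymmetry described above can be illustrated with a toy scoring sketch on a 5-point scale (1 = much closer than the original, 3 = same view, 5 = much wider). The numbers and condition labels are invented for illustration only.

    import statistics

    ratings = {
        "identical":         [2, 3, 2, 2, 3, 2],  # same-view repeats rated as "too close-up"
        "wider_distractor":  [3, 3, 2, 3, 3, 3],  # judged close to the original view
        "closer_distractor": [1, 2, 1, 2, 1, 2],  # judged clearly closer than the original
    }

    for condition, values in ratings.items():
        print(f"{condition:>18}: mean rating {statistics.mean(values):.2f}")

    # Asymmetry: wider-angle distractors deviate less from "same" (3) than closer-up ones.
    wider_dev = abs(statistics.mean(ratings["wider_distractor"]) - 3)
    closer_dev = abs(statistics.mean(ratings["closer_distractor"]) - 3)
    print(f"deviation from 'same': wider {wider_dev:.2f} vs closer {closer_dev:.2f}")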

    Visual similarity in masking and priming: The critical role of task relevance

    Cognitive scientists use rapid image sequences to study both the emergence of conscious perception (visual masking) and the unconscious processes involved in response preparation (masked priming). The present study asked two questions: (1) Does image similarity influence masking and priming in the same way? (2) Are similarity effects in both tasks governed by the extent of feature overlap in the images or only by task-relevant features? Participants in Experiment 1 classified human faces on a single dimension even though the faces varied along three dimensions (emotion, race, sex). Abstract geometric shapes and colors were tested in the same way in Experiment 2. Results showed that similarity reduced the visibility of the target in the masking task and increased response speed in the priming task, pointing to a double dissociation between the two tasks. Results also showed that only task-relevant (not objective) similarity influenced masking and priming, implying that both tasks are influenced from the beginning by the intentions of the participant. These findings are interpreted within the framework of a reentrant theory of visual perception. They imply that intentions can influence object formation prior to the separation of vision for perception and vision for action.
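
    The contrast between objective and task-relevant similarity can be made concrete with a small sketch: "objective" similarity counts feature overlap across all stimulus dimensions, whereas task-relevant similarity considers only the dimension being classified (here, emotion). The feature coding and stimulus values are illustrative assumptions, not the study's stimuli.

    def overlap(a: dict, b: dict, dims) -> float:
        """Proportion of matching features over the given dimensions."""
        return sum(a[d] == b[d] for d in dims) / len(dims)

    target = {"emotion": "happy", "race": "white", "sex": "female"}
    prime = {"emotion": "happy", "race": "black", "sex": "male"}

    all_dims = ["emotion", "race", "sex"]
    task_relevant = ["emotion"]  # the dimension participants classify

    print("objective similarity:    ", overlap(target, prime, all_dims))       # ~0.33
    print("task-relevant similarity:", overlap(target, prime, task_relevant))  # 1.0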

    Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams

    The neural basis of object recognition and semantic knowledge has been extensively studied, but the high dimensionality of object space makes it challenging to develop overarching theories of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to the 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated, high-quality image database specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research aimed at understanding visual object processing in the human brain.
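
    For readers planning to work with data of this kind, below is a hedged sketch of loading preprocessed EEG epochs with MNE-Python and averaging them per stimulus code, as one might do before pairwise decoding or representational similarity analysis. The file name, file format, and event scheme are hypothetical placeholders, not the dataset's documented layout.

    import mne
    import numpy as np

    # Hypothetical preprocessed epochs file for one subject (placeholder path).
    epochs = mne.read_epochs("sub-01_task-rsvp-epo.fif", preload=True)

    # Average the evoked response for each stimulus code present in the file.
    concept_erps = {
        code: epochs[code].average().data  # channels x times
        for code in epochs.event_id
    }

    # Stack into a (stimuli x channels x times) array for downstream analyses.
    erp_array = np.stack(list(concept_erps.values()))
    print(erp_array.shape)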

    Working memory encoding delays top-down attention to visual cortex

    The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. Cognitive Psychology, 36, 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to interfere with the deployment of top-down attention [de Fockert, J. W., Rees, G., Frith, C. D., & Lavie, N. The role of working memory in visual selective attention. Science, 291, 1803-1806, 2001, doi:10.1126/science.1056496]. It is, therefore, possible that, in addition to delaying central processes, the engagement of working memory encoding (WME) also postpones perceptual processing. Here, we tested this hypothesis with time-resolved fMRI by assessing whether WME serially postpones the action of top-down attention on low-level sensory signals. In three experiments, participants viewed a skeletal rapid serial visual presentation sequence that contained two target items (T1 and T2) separated by either a short (550 msec) or long (1450 msec) SOA. During single-target runs, participants attended and responded only to T1, whereas in dual-target runs, participants attended and responded to both targets. To determine whether T1 processing delayed top-down attentional enhancement of T2, we examined the T2 BOLD response in visual cortex by subtracting the single-task waveforms from the dual-task waveforms for each SOA. When the WME demands of T1 were high (Experiments 1 and 3), the T2 BOLD response was delayed at the short SOA relative to the long SOA. This was not the case when T1 encoding demands were low (Experiment 2). We conclude that encoding of a stimulus into working memory delays the deployment of attention to subsequent target representations in visual cortex.
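
    The subtraction logic used to isolate the T2-related response can be sketched on simulated time courses: the single-task waveform (T1 only) is subtracted from the dual-task waveform (T1 plus T2) separately for each SOA. The toy hemodynamic model, amplitudes, and peak latencies below are arbitrary assumptions chosen only to mimic a delayed T2 response at the short SOA.

    import numpy as np

    rng = np.random.default_rng(2)
    n_timepoints = 20  # assumed number of TRs in the trial-locked window

    def hrf(peak_tr: int) -> np.ndarray:
        """Toy gamma-like response shifted to peak at the given TR."""
        t = np.arange(n_timepoints, dtype=float)
        h = t ** 4 * np.exp(-t / 1.2)
        h /= h.max()
        return np.roll(h, peak_tr - int(h.argmax()))

    def noise() -> np.ndarray:
        return 0.05 * rng.standard_normal(n_timepoints)

    # T1-related activity appears in both run types; a T2-related response is added
    # only in dual-task runs, arriving later at the short SOA in this toy example.
    single = {"short_soa": hrf(5) + noise(), "long_soa": hrf(5) + noise()}
    dual = {
        "short_soa": hrf(5) + 0.8 * hrf(9) + noise(),
        "long_soa": hrf(5) + 0.8 * hrf(7) + noise(),
    }

    for soa in ("short_soa", "long_soa"):
        t2_related = dual[soa] - single[soa]
        print(f"{soa}: difference waveform peaks at TR {int(t2_related.argmax())}")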