
    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

    A retinotopic attentional trace after saccadic eye movements: evidence from event-related potentials

    Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.

    Children and older adults exhibit distinct sub-optimal cost-benefit functions when preparing to move their eyes and hands

    "© 2015 Gonzalez et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited." Numerous activities require an individual to respond quickly to the correct stimulus. The provision of advance information allows response priming, but heightened responses can cause errors (responding too early or reacting to the wrong stimulus). Thus, a balance is required between the online cognitive mechanisms (inhibitory and anticipatory) used to prepare and execute a motor response at the appropriate time. We investigated the use of advance information in 71 participants across four age groups: (i) children, (ii) young adults, (iii) middle-aged adults, and (iv) older adults. We implemented 'cued' and 'non-cued' conditions to assess age-related changes in saccadic and touch responses to targets in three movement conditions: (a) Eyes only; (b) Hands only; (c) Eyes and Hand. Children made fewer saccade errors than young adults, but they also exhibited longer response times in cued versus non-cued conditions. In contrast, older adults showed faster responses in cued conditions but exhibited more errors. The results indicate that young adults (18-25 years) achieve an optimal balance between anticipation and execution. In contrast, children show the benefits (few errors) and costs (slow responses) of good inhibition when preparing a motor response based on advance information, whilst older adults show the benefits and costs associated with a prospective response strategy (i.e., good anticipation).

    Temporal characteristics of the influence of punishment on perceptual decision making in the human brain

    Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
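    The multivariate linear discriminant analysis used here can be sketched on toy data: project each trial's multichannel EEG onto a weight vector that best separates the two conditions, and read off a single-trial "component amplitude". Everything below is illustrative (trial counts, channel count, and the discriminating scalp pattern are invented), not the study's actual data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for single-trial EEG: 200 trials x 8 "electrodes", where a
# hypothetical punishment manipulation shifts the mean scalp pattern.
n_trials, n_chan = 200, 8
labels = rng.integers(0, 2, n_trials)        # 0 = low, 1 = high punishment
pattern = rng.standard_normal(n_chan)        # assumed discriminating topography
X = rng.standard_normal((n_trials, n_chan)) + np.outer(labels, pattern)

# Fisher linear discriminant: w is proportional to Sw^-1 (mean1 - mean0),
# where Sw pools the within-class covariances.
m0, m1 = X[labels == 0].mean(0), X[labels == 1].mean(0)
Sw = np.cov(X[labels == 0].T) + np.cov(X[labels == 1].T)
w = np.linalg.solve(Sw, m1 - m0)

# Projecting each trial onto w gives the single-trial component amplitude.
y = X @ w
accuracy = np.mean((y > np.median(y)) == labels)
print(f"single-trial discrimination accuracy: {accuracy:.2f}")
```

    The per-trial projections `y` play the role of the "discriminating component" amplitudes whose trial-by-trial variation the study relates to prestimulus oscillatory activity.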

    Human scalp potentials reflect a mixture of decision-related signals during perceptual choices

    Single-unit animal studies have consistently reported decision-related activity mirroring a process of temporal accumulation of sensory evidence to a fixed internal decision boundary. To date, our understanding of how response patterns seen in single-unit data manifest themselves at the macroscopic level of brain activity obtained from human neuroimaging data remains limited. Here, we use single-trial analysis of human electroencephalography data to show that population responses on the scalp can capture choice-predictive activity that builds up gradually over time with a rate proportional to the amount of sensory evidence, consistent with the properties of a drift-diffusion-like process as characterized by computational modeling. Interestingly, at time of choice, scalp potentials continue to appear parametrically modulated by the amount of sensory evidence rather than converging to a fixed decision boundary as predicted by our model. We show that trial-to-trial fluctuations in these response-locked signals exert independent leverage on behavior compared with the rate of evidence accumulation earlier in the trial. These results suggest that in addition to accumulator signals, population responses on the scalp reflect the influence of other decision-related signals that continue to covary with the amount of evidence at time of choice.
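    The drift-diffusion-like process the abstract refers to is simple to simulate: noisy evidence accumulates at a mean rate ("drift") until it hits a fixed boundary, and stronger sensory evidence yields faster, more accurate choices. The parameter values below are arbitrary illustrations, not fits to the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=1e-3, max_t=2.0):
    """Simulate one drift-diffusion trial: evidence x accumulates with
    mean rate `drift` plus Gaussian noise until it reaches +boundary
    (choice 1) or -boundary (choice 0), or time runs out."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

# Higher drift (stronger sensory evidence) -> faster, more accurate choices.
choices, rts = zip(*(simulate_ddm(drift=0.8) for _ in range(500)))
print(f"accuracy={np.mean(choices):.2f}, mean RT={np.mean(rts):.3f}s")
```

    The finding described above is that, unlike this idealized process, the response-locked scalp signals do not all collapse onto the fixed boundary at choice time but remain graded with evidence strength.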

    How active perception and attractor dynamics shape perceptual categorization: A computational model

    We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists in anticipated patterns of agent–environment interactions that can be elicited through overt or covert (simulated) eye movements, object manipulation, etc. This information is first encoded when category information is acquired, and then re-enacted during perceptual categorization. Perceptual categorization consists in a dynamic competition between attractors that encode the sensorimotor patterns typical of each category; action prediction success counts as "evidence" for a given category and contributes to falling into the corresponding attractor. The evidence accumulation process is guided by an active perception loop, and the active exploration of objects (e.g., visual exploration) aims at eliciting expected sensorimotor patterns that count as evidence for the object category. We present a computational model incorporating these elements and describing action prediction, active perception, and attractor dynamics as key elements of perceptual categorization. We test the model in three simulated perceptual categorization tasks, and we discuss its relevance for grounded and sensorimotor theories of cognition.
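    The attractor competition described above can be illustrated with a minimal two-unit winner-take-all network: each unit pools the "evidence" for one category, decays with a leak term, and inhibits its rival, so the dynamics settle into the attractor of the better-supported category. The gains, evidence values, and two-category setup are simplifying assumptions for illustration, not the paper's model.

```python
import numpy as np

def attractor_step(x, evidence, dt=0.01, inhibition=1.2, leak=1.0):
    """One Euler step of a two-unit competitive attractor network:
    each unit is driven by its `evidence`, decays with `leak`, and
    inhibits the other unit; activations are clipped at zero."""
    drive = evidence - leak * x - inhibition * x[::-1]
    return np.clip(x + dt * drive, 0, None)

# Category A's sensorimotor predictions succeed slightly more often, so it
# receives more evidence per step and the dynamics fall into its attractor.
x = np.array([0.1, 0.1])
for _ in range(2000):
    x = attractor_step(x, evidence=np.array([1.0, 0.8]))
print("final activations:", x)  # unit 0 wins the competition
```

    Because the mutual inhibition is stronger than the leak, the symmetric state is unstable: even a small evidence advantage is amplified until the losing unit is silenced, which is what makes the settled state a categorical decision rather than a graded blend.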

    Differential neural mechanisms for early and late prediction error detection

    Emerging evidence indicates that prediction, instantiated at different perceptual levels, facilitates visual processing and enables prompt and appropriate reactions. However, the mechanisms underlying the effect of predictive coding at different stages of visual processing remain unclear. Here, we aimed to investigate early and late processing of spatial prediction violation by performing combined recordings of saccadic eye movements and fast event-related fMRI during a continuous visual detection task. Psychophysical reverse correlation analysis revealed that the degree of mismatch between current perceptual input and prior expectations is mainly processed at a late rather than an early stage, which is instead responsible for fast but general prediction error detection. Furthermore, our results suggest that conscious late detection of deviant stimuli is elicited by the assessment of the prediction error's extent more than by the prediction error per se. Functional MRI and functional connectivity data analyses indicated that interactions among higher-level brain systems modulate conscious detection of prediction error through top-down processes for the analysis of its representational content, and possibly regulate subsequent adaptation of predictive models. Overall, our experimental paradigm allowed us to dissect explicit from implicit behavioral and neural responses to deviant stimuli in terms of their reliance on predictive models.