
    Neuronal basis of visual perception and attention in visual and frontal cortex

    Roelfsema, P.R. [Promotor]

    Spatiotemporal filtering and motion illusions

    We are perplexed by Clarke et al.'s (2013) criticisms of our recent contribution to Journal of Vision (Pooresmaeili, Cicchini, Morrone, & Burr, 2012). Our group has long championed the idea that perceptual processing of information can be anchored in a dynamic coordinate system that need not correspond to the instantaneous retinal representation. Our recent evidence shows that temporal duration (Burr, Tozzi, & Morrone, 2007; Morrone, Cicchini, & Burr, 2010), orientation (Zimmermann, Morrone, Fink, & Burr, 2013), motion (Melcher & Morrone, 2003; Turi & Burr, 2012) and saccadic error-correction (Zimmermann, Burr, & Morrone, 2011) are all processed to some extent in spatiotopic coordinates. Imaging studies reinforce these findings (d'Avossa et al., 2007; Crespi et al., 2011). Much earlier, we showed that the processing of smoothly moving objects was not anchored in instantaneous, retinotopic coordinates, but in the reference frame given by the trajectory of motion. There is an effective interpolation along the trajectory, so temporal offsets in spatially collinear stimuli cause them to appear spatially offset, corresponding to the physical reality of stimuli moving over large regions of space, behind occluders (Burr, 1979; Burr & Ross, 1979). Our explanation for this surprising effect was that it could be a direct consequence of the spatiotemporal orientation of the impulse response of motion detectors, providing the spatiotemporal reference frame needed to account for the interactions between time and space (Burr & Ross, 1986; Burr, Ross, & Morrone, 1986; Burr & Ross, 2004; Nishida, 2004). Recently, we have applied the concept of spatiotemporally oriented receptive fields to account for "predictive remapping," the "nonretinotopic" effects that occur on each saccadic eye movement (Burr & Morrone, 2010; Burr & Morrone, 2012; Cicchini, Binda, Burr, & Morrone, 2012).
We were most impressed by the compelling demonstrations of Herzog's group, clearly showing that the reference frame of processing is not the instantaneous retinal position, but is flexible, depending not only on real physical motion, but also on illusory apparent motion where the stimuli do not actually move (Boi, Ogmen, Krummenacher, Otto, & Herzog, 2009). This seemed to us important, and worthy of quantitative measurement and modeling, particularly to see whether these new effects fall within the framework that so successfully explained previous demonstrations, such as spatiotemporal interpolation. It is reassuring that Clarke et al. (2013) confirm our results, albeit with some variability between subjects. More importantly, they add a very nice result, showing that our simplified version of the "litmus test" can be enhanced by attending to the motion. This is an excellent point that we overlooked. The strength of this type of motion is well known to depend on attention (Cavanagh, 1992), and it is indeed interesting that the strength of motion-induced effects depends not only on the physical conditions, but also on internal states such as attention. Perhaps attention may also provide the flexibility in choosing the most appropriate scale for analysis, which in this case would be lower, given that attention is diverted to the periphery. This would strengthen our model, and is an idea worth following up.
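The spatiotemporally oriented receptive fields invoked above can be illustrated with a minimal numerical sketch (this is an illustration of the general idea, not the authors' actual model, and all parameter values are invented): a space-time Gabor whose carrier is tilted along the trajectory x = vt responds far more strongly to a bar moving at its preferred velocity than to one moving in the opposite direction, which is what lets motion along a trajectory serve as a reference frame.

```python
import numpy as np

def spacetime_gabor(nx=64, nt=64, v=1.0, sigma_x=8.0, sigma_t=8.0, freq=0.1):
    """Space-time receptive field oriented along the trajectory x = v*t."""
    x = np.arange(nx) - nx / 2
    t = np.arange(nt) - nt / 2
    X, T = np.meshgrid(x, t)                      # rows = time, cols = space
    carrier = np.cos(2 * np.pi * freq * (X - v * T))   # tilted in space-time
    envelope = np.exp(-(X**2) / (2 * sigma_x**2) - (T**2) / (2 * sigma_t**2))
    return carrier * envelope

def drifting_bar(nx=64, nt=64, v=1.0):
    """A single bright bar drifting at v pixels/frame."""
    stim = np.zeros((nt, nx))
    for ti in range(nt):
        xi = int(nx / 2 + v * (ti - nt / 2)) % nx
        stim[ti, xi] = 1.0
    return stim

rf = spacetime_gabor(v=1.0)
matched = abs(np.sum(rf * drifting_bar(v=1.0)))    # bar along the filter's tilt
opposite = abs(np.sum(rf * drifting_bar(v=-1.0)))  # bar moving the other way
print(matched > opposite)  # the oriented filter prefers its matched velocity
```

Because the carrier is constant along the matched trajectory, the matched response integrates the whole Gaussian envelope, while the opposite-direction response oscillates and largely cancels.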

    Blood Oxygen Level-Dependent Activation of the Primary Visual Cortex Predicts Size Adaptation Illusion

    In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene

    Size Aftereffects Are Eliminated When Adaptor Stimuli Are Prevented from Reaching Awareness by Continuous Flash Suppression

    Size aftereffects are a compelling perceptual phenomenon in which we perceive the size of a stimulus as different from its actual size after a period of exposure to an adapter stimulus of a different size. Here, we used continuous flash suppression (CFS) to determine whether size aftereffects require a high-level appraisal of the adapter stimulus. The strength of size aftereffects was quantified following a 3-s exposure to perceptually visible and invisible adapters. Participants judged the size of a target that followed the adapter in comparison to a subsequent reference. Our experiments demonstrate that under CFS the adapter no longer influenced the perceived size of the subsequent target stimulus. We conclude that size aftereffects are prevented when CFS is used to suppress conscious awareness of the adapting stimulus

    Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas
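The core mechanism in the abstract — responses of orientation-selective elements multiplicatively modulated by association fields that favor mutually aligned edges — can be caricatured in a few lines. The pairwise kernel below is a hypothetical stand-in (not the kernels computed from edge statistics in the study), and the "contour among clutter" stimulus is invented; the sketch only shows how iterated multiplicative lateral support makes a collinear chain stand out from randomly oriented clutter.

```python
import numpy as np

rng = np.random.default_rng(0)

# A ring of 12 orientation-selective units (orientations in radians).
# Units 3..8 form a nearly collinear "amoeba contour"; the rest are clutter.
theta = rng.uniform(0, np.pi, 12)
theta[3:9] = 0.05 * rng.standard_normal(6)     # near-horizontal contour

resp = np.ones_like(theta)                     # uniform feedforward response

def support(ti, tj):
    """Hypothetical association-field kernel: support peaks for co-aligned pairs."""
    return np.exp(-np.sin(ti - tj) ** 2 / 0.1)

# A few iterations of multiplicative lateral modulation from the two neighbors,
# each iteration standing in for one round of lateral cortical interaction.
for _ in range(3):
    gain = 0.5 * (support(theta, np.roll(theta, 1)) * np.roll(resp, 1)
                  + support(theta, np.roll(theta, -1)) * np.roll(resp, -1))
    resp = resp * gain
    resp = resp / resp.max()                   # crude response normalization

contour, clutter = resp[3:9].mean(), resp[np.r_[0:3, 9:12]].mean()
print(contour > clutter)   # collinear elements end up relatively enhanced
```

Each iteration costs one round of lateral propagation, which is the sense in which detection time in the study grows with target complexity.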

    Incremental grouping of image elements in vision

    One important task for the visual system is to group image elements that belong to an object and to segregate them from other objects and the background. We here present an incremental grouping theory (IGT) that addresses the role of object-based attention in perceptual grouping at a psychological level and, at the same time, outlines the mechanisms for grouping at the neurophysiological level. The IGT proposes that there are two processes for perceptual grouping. The first process is base grouping and relies on neurons that are tuned to feature conjunctions. Base grouping is fast and occurs in parallel across the visual scene, but not all possible feature conjunctions can be coded as base groupings. If there are no neurons tuned to the relevant feature conjunctions, a second process called incremental grouping comes into play. Incremental grouping is a time-consuming and capacity-limited process that requires the gradual spread of enhanced neuronal activity across the representation of an object in the visual cortex. The spread of enhanced neuronal activity corresponds to the labeling of image elements with object-based attention
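The "gradual spread of enhanced neuronal activity across the representation of an object" can be caricatured as a breadth-first spread of an attentional label over connected image elements. The grid and seed below are invented for illustration; the point is that labeling time grows with distance along the object (incremental grouping is serial and time-consuming), and a disconnected object is never labeled.

```python
import numpy as np
from collections import deque

# Binary image: 1 = image elements, 0 = background. The left curve is one
# connected object; the right column is a separate, unconnected object.
img = np.array([
    [1, 1, 1, 0, 0, 1],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0, 1],
], dtype=int)

def incremental_group(img, seed):
    """Spread an attentional label from `seed` over 4-connected elements.
    Returns the iteration ("time") at which each element is reached; -1 if never."""
    reached = np.full(img.shape, -1)
    reached[seed] = 0
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and img[nr, nc] == 1 and reached[nr, nc] == -1):
                reached[nr, nc] = reached[r, c] + 1
                frontier.append((nr, nc))
    return reached

t = incremental_group(img, seed=(0, 0))
print(t[2, 3], t[0, 5])   # → 5 -1: far end reached late, other object never
```

The far end of the attended curve is labeled only after five spreading steps, while the unconnected curve keeps the background label — the label spread itself implements the grouping.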

    Receipt of reward leads to altered estimation of effort

    Effort and reward jointly shape many human decisions. Errors in predicting the effort required for a task can lead to suboptimal behavior. Here, we show that effort estimations can be biased when retrospectively re-estimated following receipt of a rewarding outcome. These biases depend on the contingency between reward and task difficulty, and are stronger for highly contingent rewards. Strikingly, the observed pattern accords with predictions from Bayesian cue integration, indicating that humans deploy an adaptive and rational strategy to deal with inconsistencies between the efforts they expend and the ensuing rewards
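The Bayesian cue-integration prediction can be sketched as precision-weighted averaging of two effort cues: the directly sensed effort and the effort implied by the reward. The numbers and reliabilities below are invented for illustration (this is the textbook combination rule, not the study's fitted model); the qualitative point is that a highly contingent (reliable) reward pulls the retrospective effort estimate further toward the reward-implied value.

```python
def reestimated_effort(sensed_effort, sigma_sense, reward_implied_effort, sigma_reward):
    """Precision-weighted (Bayesian) combination of two independent effort cues."""
    w_sense = 1 / sigma_sense**2       # precision of the sensed-effort cue
    w_reward = 1 / sigma_reward**2     # precision of the reward cue
    return (w_sense * sensed_effort
            + w_reward * reward_implied_effort) / (w_sense + w_reward)

sensed = 5.0    # effort the person actually felt (arbitrary units)
implied = 8.0   # effort a large reward would normally signal

# Highly contingent reward -> reliable cue (small sigma) -> strong bias.
high = reestimated_effort(sensed, 1.0, implied, 0.5)
# Weakly contingent reward -> unreliable cue (large sigma) -> weak bias.
low = reestimated_effort(sensed, 1.0, implied, 3.0)
print(high, low)   # high ≈ 7.4 is pulled further toward 8 than low ≈ 5.3
```

The combined estimate is always between the two cues, and its weight on each cue is proportional to that cue's reliability, which reproduces the contingency dependence reported in the abstract.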