    What we observe is biased by what other people tell us: beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues

    For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated into attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.
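
    As a minimal illustration of how such cueing effects are typically quantified (not the study's actual analysis), the sketch below computes a hemifield-wide cueing effect and a spatially specific cueing effect as mean reaction-time differences; the trial fields rt_ms, cued_hemifield, and cued_location and the example numbers are hypothetical.

```python
# Illustrative sketch, not the study's analysis: a gaze-cueing effect is the mean
# reaction-time (RT) benefit for targets in the gazed-at hemifield over the other
# hemifield; spatial specificity is the further benefit for the exact gazed-at
# location over other locations in the same hemifield. Field names are hypothetical.
from statistics import mean

def cueing_effects(trials):
    """trials: iterable of dicts with keys 'rt_ms', 'cued_hemifield', 'cued_location'."""
    cued_hemi = [t["rt_ms"] for t in trials if t["cued_hemifield"]]
    other_hemi = [t["rt_ms"] for t in trials if not t["cued_hemifield"]]
    exact_location = [t["rt_ms"] for t in trials if t["cued_location"]]
    same_hemi_other = [t["rt_ms"] for t in trials
                       if t["cued_hemifield"] and not t["cued_location"]]
    return {
        # faster responses anywhere in the gazed-at hemifield
        "hemifield_effect_ms": mean(other_hemi) - mean(cued_hemi),
        # extra advantage for the exact gazed-at location
        "spatial_specificity_ms": mean(same_hemi_other) - mean(exact_location),
    }

example_trials = [
    {"rt_ms": 420, "cued_hemifield": True,  "cued_location": True},
    {"rt_ms": 450, "cued_hemifield": True,  "cued_location": False},
    {"rt_ms": 470, "cued_hemifield": False, "cued_location": False},
]
print(cueing_effects(example_trials))  # hemifield effect ~35 ms, spatial specificity ~30 ms
```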

    Visual Performance Fields: Frames of Reference

    Performance in most visual discrimination tasks is better along the horizontal than the vertical meridian (Horizontal-Vertical Anisotropy, HVA), and along the lower than the upper vertical meridian (Vertical Meridian Asymmetry, VMA), with intermediate performance at intercardinal locations. As these inhomogeneities are prevalent throughout visual tasks, it is important to understand the perceptual consequences of dissociating spatial reference frames. In all studies of performance fields so far, allocentric environmental reference frames and egocentric observer reference frames were aligned. Here we quantified the effects of manipulating head-centric and retinotopic coordinates on the shape of visual performance fields. When observers viewed briefly presented radial arrays of Gabors and discriminated the tilt of a target relative to homogeneously oriented distractors, performance fields shifted with head tilt (Experiment 1) and with fixation (Experiment 2). These results show that performance fields shift in line with egocentric referents, corresponding to the retinal location of the stimulus.
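
    A minimal sketch of how such asymmetries can be summarized, assuming trial accuracy is tabulated by the target's polar angle in retinotopic coordinates; the HVA and VMA indices and the data below are illustrative and not taken from the experiments.

```python
# Illustrative sketch (not the study's analysis): summarize discrimination accuracy
# by the target's polar angle to expose the Horizontal-Vertical Anisotropy (HVA)
# and Vertical Meridian Asymmetry (VMA). Angles are in degrees, retinotopic
# coordinates (0 = right horizontal, 90 = upper vertical, 270 = lower vertical
# meridian). The trial data below are made up.
from collections import defaultdict

def accuracy_by_angle(trials):
    """trials: iterable of (polar_angle_deg, correct) pairs."""
    hits, counts = defaultdict(int), defaultdict(int)
    for angle, correct in trials:
        counts[angle] += 1
        hits[angle] += int(correct)
    return {angle: hits[angle] / counts[angle] for angle in counts}

acc = accuracy_by_angle([(0, True), (0, True), (90, False), (90, True),
                         (180, True), (180, True), (270, True), (270, True)])
hva = (acc[0] + acc[180]) / 2 - (acc[90] + acc[270]) / 2   # horizontal minus vertical meridian
vma = acc[270] - acc[90]                                   # lower minus upper vertical meridian
print(f"HVA = {hva:.2f}, VMA = {vma:.2f}")
```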

    Lack of color integration in visual short-term memory binding

    Bicolored objects are retained in visual short-term memory (VSTM) less efficiently than unicolored objects. This is unlike shape-color combinations, whose retention in VSTM does not differ from that observed for shapes only. It is debated whether this is due to a lack of color integration and whether this may reflect the function of separate memory mechanisms. Participants judged whether the colors of bicolored objects (each with an external and an internal color) were the same or different across two consecutive screens. Colors had to be remembered either individually or in combination. In Experiment 1, external colors in the combined colors condition were remembered better than the internal colors, and performance for both was worse than that in the individual colors condition. The lack of color integration observed in Experiment 1 was further supported by a reduced capacity of VSTM to retain color combinations, relative to individual colors (Experiment 2). Additional evidence came from Experiment 3, which showed spared color-color binding in the presence of impaired shape-color binding in a brain-damaged patient, thus suggesting that these two memory mechanisms are different.
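
    The abstract does not state how capacity was estimated; a common estimator for change-detection tasks is Cowan's K, K = N x (hit rate - false-alarm rate), sketched below with made-up numbers to show how a reduced capacity for color combinations relative to individual colors would be expressed.

```python
# Illustrative sketch (the estimator used in the paper is not specified here):
# Cowan's K expresses change-detection performance as an estimated number of
# items held in VSTM, K = N * (hit rate - false-alarm rate), where N is the
# number of to-be-remembered items (here, colors).
def cowans_k(hit_rate: float, false_alarm_rate: float, set_size: int) -> float:
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical numbers: capacity for individual colors vs. color combinations.
k_individual = cowans_k(hit_rate=0.85, false_alarm_rate=0.15, set_size=4)   # 2.8
k_combined   = cowans_k(hit_rate=0.70, false_alarm_rate=0.25, set_size=4)   # 1.8
print(k_individual, k_combined)  # a lower K for combinations indicates reduced capacity
```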

    Incremental grouping of image elements in vision

    One important task for the visual system is to group image elements that belong to an object and to segregate them from other objects and the background. We here present an incremental grouping theory (IGT) that addresses the role of object-based attention in perceptual grouping at a psychological level and, at the same time, outlines the mechanisms for grouping at the neurophysiological level. The IGT proposes that there are two processes for perceptual grouping. The first process is base grouping and relies on neurons that are tuned to feature conjunctions. Base grouping is fast and occurs in parallel across the visual scene, but not all possible feature conjunctions can be coded as base groupings. If there are no neurons tuned to the relevant feature conjunctions, a second process called incremental grouping comes into play. Incremental grouping is a time-consuming and capacity-limited process that requires the gradual spread of enhanced neuronal activity across the representation of an object in the visual cortex. The spread of enhanced neuronal activity corresponds to the labeling of image elements with object-based attention.
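
    As a rough illustration of the incremental process (a simplification, not the IGT's neural model), the sketch below treats image elements as nodes of a hypothetical adjacency graph and spreads an attentional label outward from an attended seed element one step at a time, so grouping a more extended object takes more spreading steps.

```python
# Illustrative sketch, not the IGT itself: incremental grouping modeled as label
# spreading over a graph of image elements. Starting from one attended element,
# an "enhanced activity" label spreads to connected neighbors one step per
# iteration, so the number of steps grows with the extent of the object,
# mirroring the time-consuming spread described above.
from collections import deque

def incremental_group(adjacency, seed):
    """adjacency: dict mapping element -> neighboring elements of the same object.
    Returns (set of grouped elements, number of spreading steps needed)."""
    labeled = {seed}
    frontier = deque([(seed, 0)])
    steps = 0
    while frontier:
        element, depth = frontier.popleft()
        steps = max(steps, depth)
        for neighbor in adjacency.get(element, ()):
            if neighbor not in labeled:
                labeled.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return labeled, steps

# A curve of four contour elements: grouping the far end takes three spreading steps.
curve = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(incremental_group(curve, seed="a"))  # all four elements grouped in 3 steps
```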