31 research outputs found

    Differences in illumination estimation in #thedress

    We investigated whether people who report different colors for #thedress do so because they make different assumptions about the illumination in the scene. We introduced a spherical illumination probe (Koenderink, Pont, van Doorn, Kappers, & Todd, 2007) into the original photograph, placed in the foreground or background of the scene, and, for each location, let observers manipulate the probe's chromaticity, intensity, and the direction of the illumination. Their task was to adjust the probe so that it appeared as a white sphere in the scene. When the probe was located in the foreground, observers who reported the dress to be white (white perceivers) tended to produce bluer adjustments than observers who reported it as blue (blue perceivers); blue perceivers tended to perceive the illumination as less chromatic. There were no differences in chromaticity settings between perceiver types when the probe was placed in the background, and perceiver types did not differ in their estimates of illumination intensity and direction at either probe location. These results provide direct support for the idea that the ambiguity in the perceived color of the dress can be explained by the different assumptions people make about the chromaticity of the illumination in the foreground of the scene. In a second experiment, we explored the possibility that blue perceivers are overall less sensitive to contextual cues, measuring white and blue perceivers' dress color matches and labels for manipulated versions of the original photo. The results confirm that contextual cues predominantly affect white perceivers.

    Contribution of Color Information in Visual Saliency Model for Videos

    Much research has been concerned with the contribution of low-level features of a visual scene to the deployment of visual attention, and bottom-up saliency models have been developed to predict the location of gaze from these features. Color, alongside brightness, contrast, and motion, is considered one of the primary features in computing bottom-up saliency; however, its contribution to guiding eye movements when viewing natural scenes has been debated. We investigated the contribution of color information in a bottom-up visual saliency model. The model's efficiency was tested using experimental data from 45 observers who were eye-tracked while freely exploring a large data set of color and grayscale videos. The two sets of recorded eye positions, for grayscale and color videos, were compared against a luminance-based saliency model, into which we then incorporated chrominance information. The results show that color information improves the performance of the saliency model in predicting eye positions.
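    The comparison described above (a luminance-only saliency model versus one augmented with chrominance) can be sketched in miniature. This is an illustrative toy, not the authors' model: the opponent channels, blur widths, and normalization scheme below are assumptions standing in for the multi-scale filtering a real saliency model would use.

    ```python
    import numpy as np

    def box_blur(img, k):
        """Separable box blur of width k (a crude stand-in for the
        Gaussian pyramids a real saliency model would use)."""
        kernel = np.ones(k) / k
        img = np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")
        img = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
        return img

    def center_surround(channel, center=3, surround=15):
        """Center-surround contrast: fine blur minus coarse blur."""
        return np.abs(box_blur(channel, center) - box_blur(channel, surround))

    def saliency_map(rgb, use_color=True):
        """Saliency from a luminance channel, optionally augmented with
        two opponent chrominance channels (red-green, blue-yellow)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        lum = 0.299 * r + 0.587 * g + 0.114 * b
        maps = [center_surround(lum)]
        if use_color:
            maps.append(center_surround(r - g))            # red-green opponent
            maps.append(center_surround(b - (r + g) / 2))  # blue-yellow opponent
        # normalize each feature map before combining
        total = sum(m / (m.max() + 1e-9) for m in maps)
        return total / len(maps)
    ```

    Evaluating such a model would then amount to scoring `saliency_map(frame)` against recorded fixation locations, with and without `use_color`.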

    The representation of material categories in the brain

    Using textures mapped onto virtual nonsense objects, it has recently been shown that early visual cortex plays an important role in processing material properties. Here, we examined brain activation in response to photographs of materials: wood, stone, metal, and fabric surfaces. These photographs were close-ups in the sense that the materials filled the image. In the first experiment, observers categorized the material in each image (i.e., wood, stone, metal, or fabric) while in an fMRI scanner. We predicted the assigned material category from the obtained voxel patterns using a linear classifier. Region-of-interest and whole-brain analyses demonstrated material coding in the early visual regions, with lower accuracies for more anterior regions; there was little evidence for material coding in other brain regions. In the second experiment, we used an adaptation paradigm to reveal additional brain areas involved in the perception of material categories. Participants viewed images of wood, stone, metal, and fabric, presented in blocks containing either images of different material categories (no adaptation) or images of different samples from the same material category (material adaptation). To measure baseline activation, blocks with the same material sample were presented (baseline adaptation). Material adaptation effects were found mainly in the parahippocampal gyrus, in agreement with fMRI studies of texture perception. Our findings suggest that the parahippocampal gyrus, early visual cortex, and possibly the supramarginal gyrus are involved in the perception of material categories, but in different ways; the different outcomes of the two experiments are likely due to inherent differences between the two paradigms. A third experiment suggested, based on anatomical overlap between activations, that spatial frequency information is important for within-category material discrimination.

    Control of binocular gaze in a high-precision manual task

    We investigated the precision of binocular gaze control while observers performed a high-precision manual movement: hitting a target hole in a plate with a hand-held needle. Binocular eye movements and the 3D position of the needle tip were tracked. In general, the observers oriented their gaze to the target before they reached it with the needle, and the amplitude of microsaccades scaled with the distance of the needle tip. We did not find evidence for the coordination of version and vergence during microsaccades, which would be expected if those movements displaced gaze between the needle and the target hole. In a control experiment, observers executed small saccades between marks on a slanted plane. Even when the observers executed saccades as small as the microsaccades in the needle experiment, we observed a coordinated displacement of the point of gaze along the horizontal and depth axes. Our results show that characteristics of eye movements, such as the frequency and amplitude of microsaccades, are adapted online to the task demands. However, coordinated control of version and vergence in small saccades is only observed when a movement of gaze along a slanted trajectory is explicitly instructed.

    Role of motor execution in the ocular tracking of self-generated movements

    When human observers track the movements of their own hand with their gaze, the eyes can start moving before the finger (i.e., anticipatory smooth pursuit). The signals driving this anticipation could come from motor commands during finger motor execution, or from motor intention and decision processes associated with self-initiated movements. For the present study, we built a mechanical device that could move a visual target either in the same direction as the participant's hand or in the opposite direction. Gaze pursuit of the target showed stronger anticipation when the target moved in the same direction as the hand than in the opposite direction, as evidenced by decreased pursuit latency, increased positional lead of the eye relative to the target, increased pursuit gain, decreased saccade rate, and decreased delay at the movement reversal. Some degree of anticipation occurred for incongruent pursuit, indicating a role for higher-level movement prediction in pursuit anticipation. The fact that anticipation was larger when target and finger moved in the same direction provides evidence for a direct coupling between finger and eye motor commands.

    Perceived numerosity is reduced in peripheral vision

    In four experiments, we investigated the perception of numerosity in the peripheral visual field. We found that the perceived numerosity of a peripheral cloud of dots was judged to be lower than that of a central cloud of dots, particularly when the dots were highly clustered. Blurring the stimuli according to peripheral spatial frequency sensitivity did not abolish the effect and had little impact on numerosity judgments. In a dedicated control experiment, we ruled out the possibility that the reduction in peripheral perceived numerosity is secondary to a reduction in perceived stimulus size. We suggest that visual crowding might be at the origin of the observed reduction in peripheral perceived numerosity, implying that numerosity could be partly estimated through the individuation of the elements populating the array. © 2013 ARVO

    Healthy aging is associated with decreased risk-taking in motor decision-making

    Healthy aging is associated with changes in both cognitive abilities, including decision-making, and motor control. Previous research has shown that young healthy observers perform close to optimally in a motor equivalent of economic decision-making tasks that are known to produce suboptimal decision patterns. We tested both younger (age 20-29) and older (age 60-79) adults in such a task, which involved rapid manual aiming with monetary rewards and punishments contingent on hitting different areas on a touch screen. Older adults were as close to optimal at the task as younger adults, but differed in their strategy: they appeared to be relatively less risk-seeking, adjusting their aiming to a larger extent to avoid the penalty area. A model-based interpretation of the results suggested that the difference in aiming strategy between younger and older adults was mainly driven by the former weighting monetary gains more than losses, rather than by a mis-estimation of one's own motor variability. These results parallel the general finding that older adults tend to be less risk-seeking than younger adults in economic decision-making, and complement the observation that children are even more risk-seeking than younger adults in a similar motor decision-making paradigm.
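    The "optimal" benchmark in such rapid-aiming tasks is the aim point that maximizes expected monetary gain given one's own motor variability. A minimal one-dimensional sketch of that computation follows; the region sizes, payoffs, and Gaussian endpoint noise are illustrative assumptions, not the parameters of the study above.

    ```python
    import numpy as np
    from math import erf, sqrt

    def norm_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def expected_gain(aim, sigma, gain, loss,
                      reward=(0.0, 32.0), penalty=(-32.0, 0.0)):
        """Expected payoff of aiming at `aim` along the axis joining an
        adjacent penalty and reward region, with Gaussian endpoint noise
        of standard deviation `sigma` (all units arbitrary)."""
        p_reward = norm_cdf((reward[1] - aim) / sigma) - norm_cdf((reward[0] - aim) / sigma)
        p_penalty = norm_cdf((penalty[1] - aim) / sigma) - norm_cdf((penalty[0] - aim) / sigma)
        return gain * p_reward + loss * p_penalty

    def optimal_aim(sigma, gain, loss):
        """Grid search for the aim point that maximizes expected gain."""
        aims = np.linspace(0.0, 32.0, 641)
        values = [expected_gain(a, sigma, gain, loss) for a in aims]
        return float(aims[int(np.argmax(values))])

    # A decision-maker who weights losses more heavily (as the older
    # adults appeared to) shifts the optimal aim point further away
    # from the penalty region:
    aim_gain_weighted = optimal_aim(sigma=8.0, gain=100.0, loss=-100.0)
    aim_loss_weighted = optimal_aim(sigma=8.0, gain=100.0, loss=-500.0)
    ```

    Comparing observed aim points against this benchmark, for subjective versus objective payoff weights, is how a model-based analysis can separate distorted value weighting from mis-estimated motor variability.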

    Helligkeits- und Farbwahrnehmung (Brightness and Color Perception)
