
    The effect of linguistic and visual salience in visual world studies

    Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material, including verbs, prepositions and adjectives, can influence fixations to potential referents. More recent research has begun to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual properties) interact. We recorded participants' eye movements during a Map Task, asking them to look from landmark to landmark displayed on a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual salience influenced the time course of fixations at both the beginning and the end of the trial but had no significant effect on response times. Linguistic salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present. © 2014 Cavicchio, Melcher and Poesio

    Determinants of Dwell Time in Visual Search: Similarity or Perceptual Difficulty?

    The present study examined the factors that determine dwell times in a visual search task, that is, the duration the gaze remains fixated on an object. It has been suggested that an item's similarity to the search target should be an important determinant of dwell times, because dwell times are taken to reflect the time needed to reject the item as a distractor, and such discriminations are supposed to be harder the more similar an item is to the search target. In line with this similarity view, a previous study showed that, in search for a target ring of thin line-width, dwell times on thin line-width Landolt C distractors were longer than dwell times on Landolt Cs with thick or medium line-width. However, dwell times may have been longer on thin Landolt Cs because the thin line-width made it harder to detect whether the stimuli had a gap or not. Thus, it is an open question whether dwell times on thin line-width distractors were longer because they were similar to the target or because the perceptual decision was more difficult. The present study decoupled similarity from perceptual difficulty by measuring dwell times on thin, medium and thick line-width distractors when the target had thin, medium or thick line-width. The results showed that dwell times were longer on target-similar than on target-dissimilar stimuli across all target conditions and regardless of line-width. It is concluded that prior findings of longer dwell times on thin line-width distractors can clearly be attributed to target similarity. As discussed towards the end, the finding of similarity effects on dwell times has important implications for current theories of visual search and eye movement control.

    We can guide search by a set of colours, but are reluctant to do it.

    For some real-world colour searches, the target colours are not precisely known, and any item within a range of colour values should be attended. Thus, a target representation that captures multiple similar colours would be advantageous. If such multicolour search is possible, then search for two targets (e.g., Stroud, Menneer, Cave and Donnelly, 2012) might be guided by a target representation that includes the target colours as well as the continuum of colours that fall between the targets within a contiguous region of colour space. Results from Stroud et al. (2012) suggest otherwise, however. The current set of experiments shows that guidance by a set of colours from a single region of colour space can be effective if targets are depicted as specific discrete colours. Specifically, Experiments 1-3 demonstrate that search can be guided by four and even eight colours given the appropriate conditions. However, Experiment 5 provides evidence that guidance is sometimes sensitive to how informative the target preview is for search. Experiments 6 and 7 show that a stimulus showing a continuous range of target colours is not translated into a search target representation. Thus, search can be guided by multiple discrete colours from a single region of colour space, but this approach was not adopted in a search for two targets with intervening distractor colours.

    Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation

    Full text link
    We present a computational model and algorithm for detecting diffuse and specular interface reflections and some inter-reflections. Our color reflection model is based on the dichromatic model for dielectric materials and on a color space, called S space, formed with three orthogonal basis functions. We transform color pixels measured in RGB into the S space and analyze color variations on objects in terms of brightness, hue and saturation, which are defined in the S space. When transforming the original RGB data into the S space, we discount the scene illumination color, which is estimated using a white reference plate as an active probe. As a result, the color image appears as if the scene illumination were white. Under the whitened illumination, the interface reflection clusters in the S space are all aligned with the brightness direction. The brightness, hue and saturation values exhibit a more direct correspondence to body colors and to diffuse and specular interface reflections, shading, shadows and inter-reflections than the RGB coordinates. We exploit these relationships to segment the color image, and to separate specular and diffuse interface reflections and some inter-reflections from body reflections. The proposed algorithm is effective for uniformly colored dielectric surfaces under singly colored scene illumination. Experimental results conform to our model and algorithm within the limitations discussed.
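    The illumination-discounting step described in the abstract can be sketched in a few lines. This is only an illustrative sketch: the abstract does not give the actual S-space basis functions, so the function names and the simplified brightness/saturation coordinates below are assumptions, not the paper's method.

    ```python
    import numpy as np

    def discount_illumination(pixels, white_ref):
        # Divide each channel by the illumination colour estimated from the
        # white reference plate, so the scene appears lit by white light
        # (a von Kries-style per-channel scaling; an assumption here).
        illum = np.asarray(white_ref, dtype=float)
        illum = illum / illum.max()
        return np.asarray(pixels, dtype=float) / illum

    def brightness_saturation(rgb):
        # Under whitened illumination, specular (interface) reflections lie
        # along the achromatic axis; "saturation" here is simply a colour's
        # distance from that axis (an illustrative stand-in for S-space
        # coordinates).
        rgb = np.asarray(rgb, dtype=float)
        b = rgb.mean(axis=-1, keepdims=True)   # achromatic (brightness) component
        s = np.linalg.norm(rgb - b, axis=-1)   # off-axis (saturation-like) magnitude
        return b.squeeze(-1), s
    ```

    For example, a pixel with the same colour as the white reference whitens to equal channels and has zero saturation, placing it on the brightness axis where the interface-reflection clusters align.
    
    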

    The colors seen behind transparent filters

    No full text
    How do the colors and lightnesses of surfaces seen to lie behind a transparent filter depend on the chromatic properties of the filter? A convergence model developed in prior work (D'Zmura et al., 1997 Perception 26 471 - 492; Chen and D'Zmura, 1998 Perception 27 595 - 608) suggests that the visual system interprets a filter's transformation of color in terms of a convergence in color space. Such a convergence is described by a color shift and a change in contrast. We tested the model using an asymmetric matching task. Observers adjusted, in computer-graphic simulation, the color of a surface seen behind a transparent filter in order to match the color of a surface seen in plain view. The convergence model fits the color-matching results nearly as well as a more general affine-transformation model, even though the latter has many more parameters. Other models, including von Kries scaling, did not perform as well. These results suggest that the color constancy revealed in this task is described best by a model that takes into account both color shifts and changes in contrast.

    Colour and lightness of a surface seen behind a transparent filter

    No full text
    We measured how the colour and lightness of a surface seen to lie behind a transparent filter depend on filter properties. A convergence model suggests that a filter's transformation of chromatic information from underlying surfaces is interpreted as a convergence in colour space (D'Zmura, Colantoni, Knoblauch, and Laget, 1997 Perception 26 471 - 492). Such a convergence is described by a transparency parameter alpha and by a colour that acts as the centre of convergence. We used an asymmetric matching task to test the model. In computer-graphic simulation, observers adjusted the colour of a surface seen behind a transparent colour filter in order to match the colour of a surface seen in plain view. We varied the lightness and chromatic properties of both the surface to be matched and the transparent filter. We found that the convergence model fitted the matching data nearly as well as a more general affine-transformation model, even though the latter has many more parameters (twelve) than the former (four). Linear transformation, translation, and von Kries scaling models all performed poorly. The convergence model of transparency is a general model of colour constancy. It can account for shifts in colour, such as those caused by changing the spectral properties of illumination, and can also account for shifts in contrast, like those caused by fog or by a change in the spatial distribution of illumination.
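    The convergence transform itself is compact enough to sketch directly from the abstract's description: each colour is pulled toward a centre of convergence by the transparency parameter alpha, giving four free parameters (alpha plus the three coordinates of the centre) versus the affine model's twelve (a 3x3 matrix plus a translation). The function name and array representation below are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def converge(colors, alpha, center):
        # Convergence model of transparency: each colour x is pulled toward
        # the centre of convergence c by the transparency parameter alpha:
        #     x' = c + alpha * (x - c)
        # alpha = 1 leaves colours unchanged (a perfectly clear filter);
        # alpha = 0 collapses every colour onto the centre.
        colors = np.asarray(colors, dtype=float)
        center = np.asarray(center, dtype=float)
        return center + alpha * (colors - center)
    ```

    Intermediate alpha values shrink contrast (all colours move closer together) while the centre produces the overall colour shift, which is how the model captures both effects the abstract attributes to it.
    
    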