
    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
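
    The spoke manipulation is easy to make concrete. Below is a minimal Python sketch (not the authors' code; the 5-degree ring eccentricity, the function name shift_along_spoke, and the random choice of shift direction are illustrative assumptions) of displacing each rectangle by ±1 degree of visual angle along the imaginary spoke joining it to fixation:

```python
import math
import random

def shift_along_spoke(x, y, fixation=(0.0, 0.0), magnitude_deg=1.0):
    """Shift a point radially, along the imaginary 'spoke' joining it to
    fixation, by +/- magnitude_deg of visual angle (direction at random)."""
    fx, fy = fixation
    dx, dy = x - fx, y - fy
    r = math.hypot(dx, dy)
    if r == 0:
        return x, y  # an item at fixation has no defined spoke direction
    scale = (r + magnitude_deg * random.choice((-1.0, 1.0))) / r
    return fx + dx * scale, fy + dy * scale

# Eight rectangle centres on a ring around fixation (coordinates in deg);
# the 5-deg eccentricity is an arbitrary illustrative choice.
first = [(5 * math.cos(i * math.pi / 4), 5 * math.sin(i * math.pi / 4))
         for i in range(8)]
second = [shift_along_spoke(x, y) for x, y in first]
```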

    Dynamics of Attention in Depth: Evidence from Multi-Element Tracking

    Full text link
    The allocation of attention in depth is examined using a multi-element tracking paradigm. Observers are required to track a predefined subset of two to eight elements in displays containing up to sixteen identical moving elements. We first show that depth cues, such as binocular disparity and occlusion through T-junctions, improve performance in a multi-element tracking task in the case where element boundaries are allowed to intersect in the depiction of motion in a single fronto-parallel plane. We also show that the allocation of attention across two perceptually distinguishable planar surfaces, either fronto-parallel or receding at a slanting angle and defined by coplanar elements, is easier than allocation of attention within a single surface. The same result was not found when attention had to be deployed across items of two color populations rather than of a single color. Our results suggest that, when surface information does not suffice to distinguish between targets and distractors embedded in those surfaces, division of attention across two surfaces aids in tracking moving targets. National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657).
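
    As an illustration of the paradigm described above, the following Python sketch builds a display of sixteen identical elements assigned to one or two depth planes. The disparity values, field size, and target count are hypothetical choices for illustration, not parameters taken from the study:

```python
import random

NUM_ELEMENTS = 16
NUM_TARGETS = 4            # the tracked subset varied from 2 to 8 items
DISPARITIES = (-0.1, 0.1)  # hypothetical near/far disparities (deg)

def make_trial(two_surfaces=True):
    """Assign identical moving elements to one or two depth planes and
    choose a target subset; with two surfaces, attention must be divided
    in depth across the perceptually distinct planes."""
    elements = [{"id": i,
                 "disparity": DISPARITIES[i % 2] if two_surfaces else 0.0,
                 "x": random.uniform(-8.0, 8.0),
                 "y": random.uniform(-6.0, 6.0)}
                for i in range(NUM_ELEMENTS)]
    targets = random.sample(range(NUM_ELEMENTS), NUM_TARGETS)
    return elements, targets

elements, targets = make_trial(two_surfaces=True)
```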

    Dissociating the effect of disruptive colouration on localisation and identification of camouflaged targets

    Get PDF
    Disruptive camouflage features contrasting areas of pigmentation across the animal's surface that form false edges, which disguise the shape of the body and impede detection. In many taxa these false edges feature local contrast enhancement, or edge enhancement: light areas have lighter edges and dark areas have darker edges. This additional quality is often overlooked in existing research. Here we ask whether disruptive camouflage can have benefits above and beyond concealing location. Using a novel paradigm, we dissociate the time courses of localisation and identification of a target in a single experiment. We measured the display time required for a stimulus to be located or identified (the critical duration). Targets featured either uniform, disruptive, or edge-enhanced disruptive colouration. Critical durations were longer for identifying targets with edge-enhanced disruptive colouration even when targets were presented against a contrasting background, such that all target types were located equally quickly. For the first time, we establish empirically that disruptive camouflage not only conceals location but also disguises identity. This shows that this form of camouflage can be useful even when animals are not hidden. Our findings offer insights into how edge-enhanced disruptive colouration undermines visual perception by disrupting object recognition.
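
    A "critical duration" lends itself to an adaptive psychophysical procedure. Below is a hedged Python sketch of one plausible approach, a two-down/one-up staircase on display time, run separately for the localisation and identification tasks; the abstract does not specify the authors' actual procedure, and the step size, starting value, and simulated observer here are assumptions:

```python
import random

def staircase(run_trial, start_ms=200, step_ms=10, reversals_needed=8):
    """Two-down/one-up staircase on display duration: two correct
    responses shorten the display, one error lengthens it.
    run_trial(duration_ms) -> True if the observer responded correctly."""
    duration, correct_streak, reversals, last_dir = start_ms, 0, [], 0
    while len(reversals) < reversals_needed:
        if run_trial(duration):
            correct_streak += 1
            if correct_streak == 2:
                correct_streak, direction = 0, -1
            else:
                continue  # one correct response: no change yet
        else:
            correct_streak, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(duration)  # direction change: record a reversal
        last_dir = direction
        duration = max(step_ms, duration + direction * step_ms)
    return sum(reversals) / len(reversals)  # critical-duration estimate

# Toy simulated observer: probability correct grows with display time.
estimate = staircase(lambda d: random.random() < min(0.95, d / 150))
print(round(estimate, 1))
```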

    A Neural Theory of Attentive Visual Search: Interactions of Boundary, Surface, Spatial, and Object Representations

    Full text link
    Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model's visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger database, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992). The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation. Air Force Office of Scientific Research (F49620-92-J-0499, 90-0175, F49620-92-J-0334); Advanced Research Projects Agency (AFOSR 90-0083, ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100); Northeast Consortium for Engineering Education (NCEE/A303/21-93 Task 0021); British Petroleum (89-A-1204); National Science Foundation (NSF IRI-90-00530).
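
    The search cycle the model describes (form a candidate grouping, compare it with a recognition category, recurse to finer groupings on a mismatch) can be caricatured in a few lines of Python. This is a structural sketch only, not the neural model itself; the half-split refinement and the category dictionary are illustrative assumptions:

```python
def attentive_search(group, category_of, target):
    """Sketch of the model's search cycle: a candidate grouping is compared
    with object recognition categories; a mismatch triggers recursive
    selection of finer groupings until the target is identified."""
    cats = {category_of[item] for item in group}
    if cats == {target}:
        return group            # grouping matches the target category
    if len(group) == 1:
        return None             # single item of the wrong category: reject
    mid = len(group) // 2
    for subgroup in (group[:mid], group[mid:]):   # deeper search
        found = attentive_search(subgroup, category_of, target)
        if found:
            return found
    return None

items = ["d1", "d2", "t", "d3"]
category_of = {"d1": "red", "d2": "red", "t": "green", "d3": "red"}
print(attentive_search(items, category_of, "green"))   # -> ['t']
```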

    Comparing Segmentation by Time and by Motion in Visual Search: An fMRI Investigation

    Get PDF
    Brain activity was recorded while participants engaged in a difficult visual search task for a target defined by the spatial configuration of its component elements. The search displays were segmented by time (a preview then a search display), by motion, or were unsegmented. A preparatory network showed activity to the preview display in the time but not in the motion segmentation condition. A region of the precuneus showed (i) higher activation when displays were segmented by time or by motion, and (ii) activity that correlated with larger segmentation benefits behaviorally, regardless of the cue. Additionally, the results revealed that success in temporal segmentation was correlated with reduced activation in early visual areas, including V1. The results depict partially overlapping brain networks for segmentation in search by time and motion, with both cue-independent and cue-specific mechanisms.
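
    The brain-behaviour result reported above amounts to an across-participant correlation between activation and the behavioural benefit of segmentation. A minimal Python sketch with hypothetical numbers (the betas and RTs below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical per-participant values: precuneus betas and mean RTs (ms).
precuneus_beta = np.array([0.8, 1.2, 0.5, 1.6, 1.0, 1.4])
rt_unsegmented = np.array([950, 1020, 900, 1100, 980, 1060])
rt_segmented   = np.array([880, 890, 870, 900, 905, 895])

# Behavioural segmentation benefit: RT saved when the display is segmented.
benefit = rt_unsegmented - rt_segmented

# Pearson correlation between activation and benefit.
r = np.corrcoef(precuneus_beta, benefit)[0, 1]
print(f"r = {r:.2f}")
```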

    Cortical Dynamics of Boundary Segmentation and Reset: Persistence, Afterimages, and Residual Traces

    Full text link
    Using a neural network model of boundary segmentation and reset, Francis, Grossberg, and Mingolla (1994) linked the percept of persistence to the duration of a boundary segmentation after stimulus offset. In particular, the model simulated the decrease of persistence duration with an increase in stimulus duration and luminance. The present article reveals further evidence for the neural mechanisms used by the theory. Simulations show that the model's reset signals generate orientational afterimages, such as the MacKay effect, when the reset signals can be grouped by a subsequent boundary segmentation that generates illusory contours through them. Simulations also show that the same mechanisms explain properties of residual traces, which increase in duration with stimulus duration and luminance. The model hereby discloses previously unsuspected mechanistic links between data about persistence and afterimages, and helps to clarify the sometimes controversial issues surrounding distinctions between persistence, residual traces, and afterimages. Air Force Office of Scientific Research (F49620-92-J-0499); Office of Naval Research (N00014-91-J-4100, N00014-92-J-4015).
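
    To make the qualitative claim concrete (persistence shrinks as stimulus duration and luminance grow), here is a toy Python model in which longer or brighter stimuli habituate the circuit more, so the post-offset trace starts lower and decays faster. It is purely illustrative and does not reproduce the model's actual equations; all constants are arbitrary:

```python
import math

def persistence_ms(stim_duration_ms, luminance, threshold=0.2,
                   base_tau=120.0, k_dur=0.002, k_lum=0.5):
    """Visible persistence as the time for an exponentially decaying
    post-offset boundary trace to fall below threshold; habituation
    from longer/brighter stimuli lowers the starting level and tau."""
    start = 1.0 / (1.0 + k_dur * stim_duration_ms + k_lum * luminance)
    tau = base_tau * start
    if start <= threshold:
        return 0.0
    return tau * math.log(start / threshold)

# Persistence decreases with stimulus duration, as in the simulations.
for dur in (50, 200, 800):
    print(dur, round(persistence_ms(dur, luminance=1.0), 1))
```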

    Visual search for size is influenced by a background texture gradient.

    Get PDF

    Probing the time course of facilitation and inhibition in gaze cueing of attention in an upper-limb reaching task

    Get PDF
    Previous work has revealed that social cues, such as gaze and pointed fingers, can lead to a shift in the focus of another person's attention. Research investigating the mechanisms of these shifts of attention has typically employed detection or localization button-pressing tasks. Because in-depth analyses of the spatiotemporal characteristics of aiming movements can provide additional insights into the dynamics of stimulus processing, in the present study we used a reaching paradigm to further explore the processing of social cues. In Experiments 1 and 2, participants made aiming movements to a left or right location after a nonpredictive eye-gaze cue toward one of these target locations. Seven stimulus onset asynchronies (SOAs), from 100 to 2,400 ms, were used. Both the temporal (reaction time, RT) and spatial (initial movement angle, IMA) characteristics of the movements were analyzed. RTs were shorter for cued (gazed-at) than for uncued targets across most SOAs. There were, however, no statistical differences in IMAs between movements to cued and uncued targets, suggesting that action planning was not affected by the gaze cue. In Experiment 3, the social cue was a finger pointing to one of the two target locations. Finger-pointing cues generated significant cueing effects in both RTs and IMAs. Overall, these results indicate that eye-gaze and finger-pointing social cues are processed differently. Perception–action coupling (i.e., a tight link between the response and the social cue that is presented) might play a role in both the generation of action and the deviation of trajectories toward cued and uncued targets.
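
    Cueing effects of the kind reported above are computed per SOA as mean uncued minus mean cued performance, separately for the temporal (RT) and spatial (IMA) measures. A small Python sketch with hypothetical trial records (the numbers and record layout are invented for illustration):

```python
import statistics
from collections import defaultdict

# Each trial: (soa_ms, cued: bool, rt_ms, ima_deg).  Hypothetical records.
trials = [(100, True, 312, 1.8), (100, False, 335, 2.0),
          (300, True, 298, 1.5), (300, False, 330, 1.6),
          (2400, True, 301, 1.7), (2400, False, 305, 1.7)]

def cueing_effects(trials):
    """Cueing effect at each SOA: mean uncued minus mean cued, computed
    separately for reaction time and initial movement angle."""
    by_cell = defaultdict(list)
    for soa, cued, rt, ima in trials:
        by_cell[(soa, cued)].append((rt, ima))
    effects = {}
    for soa in sorted({s for s, _ in by_cell}):
        cued, uncued = by_cell[(soa, True)], by_cell[(soa, False)]
        rt_eff = (statistics.mean(r for r, _ in uncued)
                  - statistics.mean(r for r, _ in cued))
        ima_eff = (statistics.mean(a for _, a in uncued)
                   - statistics.mean(a for _, a in cued))
        effects[soa] = (rt_eff, ima_eff)
    return effects

print(cueing_effects(trials))
```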

    Assisted Viewpoint Interaction for 3D Visualization

    Get PDF
    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To control the viewpoint effectively, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted, exploration of a virtual environment. Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design. The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences.
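
    One common way to implement this kind of attention-redirecting assistance is to blend the viewer's input with an automation signal that points toward content of interest. The sketch below is a generic illustration, not the dissertation's attentive-navigation algorithm; the blending weight and the 2D simplification are assumptions:

```python
import math

def blend_view_direction(user_dir, target_dir, assistance=0.3):
    """Nudge the user's 2D view direction toward a point of interest
    while leaving most control with the viewer.  assistance=0 is fully
    manual; assistance=1 is fully automated."""
    ux, uy = user_dir
    tx, ty = target_dir
    bx = (1 - assistance) * ux + assistance * tx
    by = (1 - assistance) * uy + assistance * ty
    norm = math.hypot(bx, by) or 1.0       # renormalize to a unit vector
    return bx / norm, by / norm

# Viewer looks along +x; a landmark lies along +y; the camera is
# gently redirected without seizing control.
print(blend_view_direction((1.0, 0.0), (0.0, 1.0)))
```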

    When are abrupt onsets found efficiently in complex visual search? Evidence from multi-element asynchronous dynamic search

    Get PDF
    Previous work has found that search principles derived from simple visual search tasks do not necessarily apply to more complex search tasks. Using a Multielement Asynchronous Dynamic (MAD) visual search task, in which large numbers of stimuli could be moving, stationary, and/or changing in luminance, Kunar and Watson (M. A. Kunar & D. G. Watson, 2011, Visual search in a Multielement Asynchronous Dynamic (MAD) world, Journal of Experimental Psychology: Human Perception and Performance, Vol. 37, pp. 1017–1031) found that, unlike in previous work, participants missed a higher number of targets, with search for moving items worse than for static items, and that there was no benefit for finding targets that showed a luminance onset. In the present research, we investigated why luminance onsets do not capture attention and whether luminance onsets can ever capture attention in MAD search. Experiment 1 investigated whether blinking stimuli, which abruptly offset for 100 ms before reonsetting (conditions known to produce attentional capture in simpler visual search tasks), captured attention in MAD search, and Experiments 2-5 investigated whether giving participants advance knowledge of and preexposure to the blinking cues produced efficient search for blinking targets. Experiments 6-9 investigated whether unique luminance onsets, unique motion, or unique stationary items captured attention. The results showed that luminance onsets captured attention in MAD search only when they were unique, consistent with a top-down unique-feature hypothesis.
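
    The blinking manipulation is, at bottom, a visibility schedule. A hedged Python sketch (only the 100-ms offset comes from the text above; the cycle length and onset time are arbitrary illustrative choices):

```python
def is_visible(t_ms, onset_ms, blink_period_ms=1000, off_ms=100):
    """Visibility of a 'blinking' item: after its first onset it
    repeatedly offsets for off_ms before reonsetting."""
    if t_ms < onset_ms:
        return False
    phase = (t_ms - onset_ms) % blink_period_ms
    return phase < (blink_period_ms - off_ms)

# The item disappears for the last 100 ms of each 1-s cycle.
timeline = [is_visible(t, onset_ms=500) for t in range(0, 3000, 100)]
print(timeline)
```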