
    Dynamics of Attention in Depth: Evidence from Multi-Element Tracking

    The allocation of attention in depth is examined using a multi-element tracking paradigm. Observers are required to track a predefined subset of two to eight elements in displays containing up to sixteen identical moving elements. We first show that depth cues, such as binocular disparity and occlusion through T-junctions, improve performance in a multi-element tracking task when element boundaries are allowed to intersect in the depiction of motion in a single fronto-parallel plane. We also show that the allocation of attention across two perceptually distinguishable planar surfaces, either fronto-parallel or receding at a slant and defined by coplanar elements, is easier than the allocation of attention within a single surface. The same result was not found when attention had to be deployed across items of two color populations rather than of a single color. Our results suggest that, when surface information does not suffice to distinguish between targets and distractors embedded in these surfaces, dividing attention across two surfaces aids in tracking moving targets.
    National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)

    Attention in Depth: Disparity and Occlusion Cues Facilitate Multi-Element Visual Tracking

    Human observers can track up to five moving targets in a display with ten identical elements (Pylyshyn and Storm, 1988; Yantis, 1992). Previous experiments manipulated element trajectories to prevent intersections of element boundaries, evidently in the belief that transient overlaps among homogeneous elements make the task too hard. We examine whether depth cues such as occlusion (T-junctions) and disparity affect performance in a tracking task when element boundaries, as projected onto the two-dimensional plane of the monitor screen, are allowed to intersect. Elements move smoothly in depth, as well as in horizontal and vertical position, throughout a 7-second tracking period. A probe is then flashed, and subjects report whether the flash occurred on a target or on a non-target. Overlapping circular objects form T-junctions when shaded to appear like spheres, or figure-eight regions when rendered as disks. Two factors, disparity and T-junctions, are considered. Results from eight naive observers show that performance improves for displays with depth information (T-junctions or disparity), suggesting that depth cues are useful for multi-element tracking.
    National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657, N00014-94-1-0597, N00014-95-1-0409)

    Motion Aftereffects Due to Interocular Summation of Adaptation to Linear Motion

    The motion aftereffect (MAE) can be elicited by adapting observers to global motion before they view a display containing no global motion. Experiments by others have shown that if the left eye of an observer is adapted to motion in one direction and the right eye to motion in the opposite direction, no MAE is reported during binocular testing. The present study investigated whether no binocular adaptation had occurred because the monocular motion signals cancelled each other during testing. Observers were adapted to different, but not quite opposite, directions of motion in the two eyes. Either both eyes, the left eye, or the right eye were tested, and observers reported the direction of perceived motion during the test. When they saw the test stimulus with both eyes, observers reported seeing motion in the direction opposite the vectorial sum of the adaptation directions. In the monocular test conditions, observers reported MAE directions about halfway between their binocular report and the direction opposite the corresponding monocular adaptation direction, indicating that both monocular and binocular sites had adapted. A decomposition of the observed MAEs based on two strictly monocular and one binocular representation of motion adaptation can account for the data.
    Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE A303-21-93); Office of Naval Research (N00014-91-J-4100, N00014-94-1-0597)
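    The vector-sum account above can be sketched numerically. In this toy computation (not the study's actual fitting procedure), each adaptation direction is a unit vector, the binocular MAE points opposite the vector sum of the two eyes' adaptation directions, and a monocular test mixes a monocular site with the shared binocular site; the 0.5 weights are illustrative assumptions.

```python
import math

def unit(deg):
    """Unit vector for a direction given in degrees."""
    r = math.radians(deg)
    return (math.cos(r), math.sin(r))

def direction(v):
    """Direction of a vector, in degrees on [0, 360)."""
    return math.degrees(math.atan2(v[1], v[0])) % 360

def mae_direction(adapt_dirs, weights):
    """Predicted MAE: opposite the weighted vector sum of adaptation directions."""
    x = sum(w * unit(d)[0] for d, w in zip(adapt_dirs, weights))
    y = sum(w * unit(d)[1] for d, w in zip(adapt_dirs, weights))
    return direction((-x, -y))

# Left eye adapted to 30 deg, right eye to 90 deg (different, not quite opposite).
left, right = 30.0, 90.0

# Binocular test: MAE opposite the vector sum of both adaptation directions.
binoc = mae_direction([left, right], [1.0, 1.0])  # 240 deg

# Monocular (left-eye) test: the left-eye monocular site and the binocular
# site both contribute, so the report lies between 210 deg (opposite left
# adaptation) and the 240 deg binocular report.
mono_left = mae_direction([left, left, right], [1.0, 0.5, 0.5])
```

    The weighting scheme is only meant to show how a mixture of one binocular and two monocular adaptation sites yields intermediate monocular reports, as the abstract describes.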

    The Role of Edges and Line-Ends in Illusory Contour Formation

    Illusory contours can be induced along directions approximately collinear to edges or approximately perpendicular to the ends of lines. Using a rating-scale procedure, we explored the relation between the two types of inducers by systematically varying the thickness of inducing elements to produce varying amounts of "edge-like" or "line-like" induction. Inducers for our illusory figures consisted of concentric rings with arcs missing. Observers judged the clarity and brightness of illusory figures as the number of arcs, their thicknesses, and their spacings were parametrically varied. Degree of clarity and amount of induced brightness were both found to be inverted-U functions of the number of arcs. These results mandate that any valid model of illusory contour formation must account for interference effects between parallel lines or between those neural units responsible for completion of boundary signals in directions perpendicular to the ends of thin lines. Line width was found to have an effect on both clarity and brightness, a finding inconsistent with models that employ only completion perpendicular to inducer orientation.
    Air Force Office of Scientific Research (F49620-92-J-0334, URI 90-0175); National Science Foundation (Graduate Fellowship); Office of Naval Research (N00014-91-J-4100)

    A Contrast- and Luminance-Driven Multiscale Network Model of Brightness Perception

    A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, low- and high-contrast missing fundamental, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce brightness profiles through the interaction of boundary and feature signals. Boundary computations that are sensitive to luminance steps and to continuous luminance gradients are presented. A new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information is provided and directly addresses the issue of brightness "anchoring." Computer simulations illustrate the model's competencies.
    Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE-A303-21-93); Office of Naval Research (N00014-91-J-4100); German BMFT grant (413-5839-01 1N 101 C/1); CNPq and NUTES/UFRJ, Brazil
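    The filling-in mechanism this abstract builds on can be illustrated with a minimal 1-D sketch, assuming feature signals spread by local diffusion except where a boundary signal blocks the flow; the array contents, diffusion rate, and step count here are illustrative, not the model's.

```python
def fill_in(features, boundaries, steps=200, rate=0.25):
    """Diffuse feature signals laterally; boundary signals block the flow,
    so activity equilibrates to a plateau within each compartment."""
    x = list(features)
    n = len(x)
    for _ in range(steps):
        nxt = x[:]
        for i in range(n):
            for j in (i - 1, i + 1):
                # boundaries[k] == True blocks flow between positions k and k+1
                if 0 <= j < n and not boundaries[min(i, j)]:
                    nxt[i] += rate * (x[j] - x[i])
        x = nxt
    return x

# A step edge: contrast-driven feature signals concentrated near the edge;
# a boundary signal at the edge keeps the two sides from mixing.
features   = [0, 0, 0, 1, 5, 0, 0, 0]
boundaries = [False, False, False, True, False, False, False]
filled = fill_in(features, boundaries)
# Left of the edge fills in to 0.25, right to 1.25 (the mean of each compartment),
# producing two uniform brightness plateaus separated at the boundary.
```

    Diffusion conserves the total feature signal within each compartment, which is why the plateaus settle at the compartment means; the full model's contrast- and luminance-driven feature channels are, of course, far richer than this caricature.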

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 --> MT and V1 --> V2 --> MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. 
    Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)

    Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction

    A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points, and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. 
    Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception.
    Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
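    The propagation of unambiguous feature-tracking directions to ambiguous points ("motion capture") described above can be caricatured in a few lines. This nearest-confident-neighbor rule is a toy stand-in for the model's spatially long-range filters and top-down grouping feedback, not the model itself.

```python
def capture(directions, confident, steps=10):
    """Toy motion capture: cells with unambiguous (feature-tracking)
    directions keep them; ambiguous cells adopt the direction of an
    adjacent confident cell as confidence propagates outward."""
    d = list(directions)
    known = list(confident)
    for _ in range(steps):
        nd, nk = d[:], known[:]
        for i in range(len(d)):
            if not known[i]:
                for j in (i - 1, i + 1):
                    if 0 <= j < len(d) and known[j]:
                        nd[i], nk[i] = d[j], True
                        break
        d, known = nd, nk
    return d

# A barberpole-like line: only the line end (index 0) carries an unambiguous
# feature-tracking direction (90 deg); interior points see only the ambiguous
# aperture direction (0 deg).
dirs      = [90.0, 0.0, 0.0, 0.0, 0.0]
confident = [True, False, False, False, False]
captured = capture(dirs, confident)  # [90.0, 90.0, 90.0, 90.0, 90.0]
```

    One confident signal per step spreads to its neighbor, so the terminator's direction captures the whole contour, which is the qualitative outcome the model achieves with its grouping dynamics.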

    Visual Cortical Mechanisms of Perceptual Grouping: Interacting Layers, Networks, Columns, and Maps

    The visual cortex has a laminar organization whose circuits form functional columns in cortical maps. How this laminar architecture supports visual percepts is not well understood. A neural model proposes how the laminar circuits of V1 and V2 generate perceptual groupings that maintain sensitivity to the contrasts and spatial organization of scenic cues. The model can decisively choose which groupings cohere and survive, even while balanced excitatory and inhibitory interactions preserve contrast-sensitive measures of local boundary likelihood or strength. In the model, excitatory inputs from the LGN activate layers 4 and 6 of V1. Layer 6 activates an on-center off-surround network of inputs to layer 4. Together these layer 4 inputs preserve analog sensitivity to LGN input contrast. Layer 4 cells excite pyramidal cells in layer 2/3, which activate monosynaptic long-range horizontal excitatory connections between layer 2/3 pyramidal cells, and short-range disynaptic inhibitory connections mediated by smooth stellate cells. These interactions support inward perceptual grouping between two or more boundary inducers, but not outward grouping from a single inducer. These boundary signals feed back to layer 4 via the layer 6-to-4 on-center off-surround network. This folded feedback joins cells in different layers into functional columns while selecting winning groupings. Layer 6 in V1 also sends top-down signals to the LGN via an on-center off-surround network, which suppresses LGN cells that do not receive feedback while selecting, enhancing, and synchronizing the activity of those that do.
    The model is used to simulate psychophysical and neurophysiological data about perceptual grouping, including various Gestalt grouping laws.
    Air Force Office of Scientific Research (90-0175); British Petroleum (BP 89A-1204); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-92-J-4015); HNC Software (SC9-4-001); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657)
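    The contrast-preserving layer 6-to-4 interaction described above is commonly modeled as a shunting on-center off-surround network. Below is a generic sketch of such a network's equilibrium, x_i = B*I_i / (A + sum(I)), not the paper's exact circuit: it shows how activities stay bounded while the ratios across cells, i.e. analog contrast sensitivity, are preserved. The parameters A and B are illustrative.

```python
def shunting_equilibrium(inputs, A=1.0, B=1.0):
    """Equilibrium of a shunting on-center off-surround network:
    x_i = B * I_i / (A + sum(I)).  Total activity is bounded by B,
    yet the pattern of ratios across cells is preserved."""
    total = sum(inputs)
    return [B * I / (A + total) for I in inputs]

weak   = shunting_equilibrium([1.0, 2.0, 4.0])
strong = shunting_equilibrium([10.0, 20.0, 40.0])  # same pattern, 10x intensity

# Between-cell ratios are identical at both intensities (4:2:1),
# while total activity remains below B even as the input grows.
ratio_weak, ratio_strong = weak[2] / weak[0], strong[2] / strong[0]
```

    The divisive term sum(I) in the denominator is the off-surround's contribution; it normalizes the total while leaving relative activities untouched, which is the sense in which the layer 4 inputs "preserve analog sensitivity to LGN input contrast."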

    View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds

    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature-tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
    Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)