
    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 → MT and V1 → V2 → MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
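
To make the aperture constraint the abstract invokes concrete, here is a minimal numeric sketch (not the Motion BCS equations): a local detector viewing a straight moving edge can recover only the velocity component normal to that edge, so every interior point of a figure under-constrains the true motion. All values below are illustrative.

```python
import numpy as np

# Aperture problem: a detector that sees only a straight edge through a small
# aperture measures just the velocity component normal to the edge.
true_velocity = np.array([1.0, 0.0])            # figure moves rightward

for edge_deg in (0, 30, 60, 90):                # orientation of the local edge
    theta = np.radians(edge_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])   # unit normal to edge
    measured = np.dot(true_velocity, normal) * normal    # what the aperture sees
    print(f"edge at {edge_deg:2d} deg -> local signal {measured.round(2)}")
```

Only where the edge is perpendicular to the true motion (here 90 deg) does the local signal equal the figure's velocity; the grouping mechanisms described above must supply the rest.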

    Dynamics of Attention in Depth: Evidence from Multi-Element Tracking

    The allocation of attention in depth is examined using a multi-element tracking paradigm. Observers are required to track a predefined subset of two to eight elements in displays containing up to sixteen identical moving elements. We first show that depth cues, such as binocular disparity and occlusion through T-junctions, improve performance in a multi-element tracking task in which element boundaries are allowed to intersect in the depiction of motion in a single fronto-parallel plane. We also show that allocating attention across two perceptually distinguishable planar surfaces, either fronto-parallel or receding at a slant, and defined by coplanar elements, is easier than allocating attention within a single surface. The same result was not found when attention had to be deployed across items of two color populations rather than of a single color. Our results suggest that, when surface information does not suffice to distinguish between targets and distractors embedded in these surfaces, division of attention across two surfaces aids in tracking moving targets.
    National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)

    Attention in Depth: Disparity and Occlusion Cues Facilitate Multi-Element Visual Tracking

    Human observers can track up to five moving targets in a display with ten identical elements (Pylyshyn and Storm, 1988; Yantis, 1992). Previous experiments manipulated element trajectories to prevent intersections of element boundaries, evidently in the belief that transient overlaps among homogeneous elements make the task too hard. We examine whether depth cues such as occlusion (T-junctions) and disparity affect performance in a tracking task when element boundaries, as projected onto the two-dimensional plane of the monitor screen, are allowed to intersect. Elements move smoothly in depth, as well as in horizontal and vertical position, throughout a 7-second tracking period. A probe is then flashed, and subjects report whether the flash occurred on a target or on a non-target. Overlapping circular elements form T-junctions when shaded to appear as spheres, or figure-eight regions when rendered as flat disks. Two factors, disparity and T-junctions, are considered. Results from eight naive observers show that performance improves for displays with depth information (T-junctions or disparity), suggesting that depth cues are useful for multi-element tracking.
    National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657, N00014-94-1-0597, N00014-95-1-0409)

    Motion Aftereffects Due to Interocular Summation of Adaptation to Linear Motion

    The motion aftereffect (MAE) can be elicited by adapting observers to global motion before they view a display containing no global motion. Experiments by others have shown that if the left eye of an observer is adapted to motion going in one direction, no MAE is reported during binocular testing. The present study investigated whether no binocular adaptation had occurred because the monocular motion signals cancelled each other during testing. Observers were adapted to different, but not quite opposite, directions of motion in the two eyes. Either both eyes, the left eye, or the right eye were tested. Observers reported the direction of perceived motion during the test. When they saw the test stimulus with both eyes, observers reported seeing motion in the direction opposite the vectorial sum of the adaptation directions. In the monocular test conditions, observers reported MAE directions about halfway between their binocular report and the direction opposite the corresponding monocular adaptation direction, indicating that both monocular and binocular sites had adapted. A decomposition of the observed MAEs based on two strictly monocular and one binocular representation of motion adaptation can account for the data.
    Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE A303-21-93); Office of Naval Research (N00014-91-J-4100, N00014-94-1-0597)
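
The reported decomposition lends itself to a short worked example. The sketch below assumes two strictly monocular adaptation sites plus one binocular site that adapts to the vector sum of the two eyes' directions, mixed with equal weights; the adaptation directions and weights are illustrative choices, not the study's fitted parameters.

```python
import numpy as np

def unit(deg):
    """Unit vector for a direction given in degrees."""
    r = np.radians(deg)
    return np.array([np.cos(r), np.sin(r)])

adapt_left, adapt_right = 70.0, 110.0   # different, not quite opposite, directions

# Binocular site adapts toward the vector sum; its aftereffect points opposite.
binoc = -(unit(adapt_left) + unit(adapt_right))
binoc_deg = np.degrees(np.arctan2(binoc[1], binoc[0])) % 360
print(f"binocular test MAE: {binoc_deg:.0f} deg")        # 270 deg, opposite 90 deg

# Monocular test: equal-weight mix of the binocular aftereffect and the
# direction opposite that eye's own adaptation.
for eye, adapt in (("left", adapt_left), ("right", adapt_right)):
    mono = unit(binoc_deg) + unit((adapt + 180.0) % 360.0)
    mono_deg = np.degrees(np.arctan2(mono[1], mono[0])) % 360
    print(f"{eye:>5} eye test MAE: {mono_deg:.0f} deg")
```

With these numbers the monocular predictions (about 260 and 280 deg) fall roughly halfway between the binocular report (270 deg) and the directions opposite each eye's adaptation (250 and 290 deg), matching the qualitative pattern in the abstract.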

    Neural Dynamics of Motion Processing and Speed Discrimination

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, one that realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1 → MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density, and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
    Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-94-1-0597); Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-90-00530)
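
One way to see how output thresholds that covary with filter size can yield a size-speed correlation is the toy calculation below. It is a deliberately reduced sketch, with made-up units and a winner-take-all readout standing in for the model's competition stage, not the published model.

```python
import numpy as np

sizes = np.array([1.0, 2.0, 4.0, 8.0])   # extents of spatially short-range filters
T = 1.0                                   # transient integration window

def responses(speed):
    drift = speed * T                        # distance a feature moves per window
    activation = np.minimum(drift, sizes)    # a filter fills up to its own extent
    thresholds = 0.25 * sizes                # output threshold covaries with size
    return np.maximum(activation - thresholds, 0.0)

for v in (0.5, 1.0, 2.0, 4.0, 8.0):
    r = responses(v)
    print(f"speed {v:>3}: responses {r.round(2)} -> peak scale {sizes[np.argmax(r)]}")
```

Small filters saturate quickly while large filters pay a higher threshold, so the above-threshold peak shifts to larger scales as speed grows, yielding a crude distributed speed code.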

    The Role of Edges and Line-Ends in Illusory Contour Formation

    Illusory contours can be induced along directions approximately collinear to edges or approximately perpendicular to the ends of lines. Using a rating scale procedure, we explored the relation between the two types of inducers by systematically varying the thickness of inducing elements to result in varying amounts of "edge-like" or "line-like" induction. Inducers for our illusory figures consisted of concentric rings with arcs missing. Observers judged the clarity and brightness of illusory figures as the number of arcs, their thicknesses, and their spacings were parametrically varied. Degree of clarity and amount of induced brightness were both found to be inverted-U functions of the number of arcs. These results mandate that any valid model of illusory contour formation must account for interference effects between parallel lines or between those neural units responsible for completion of boundary signals in directions perpendicular to the ends of thin lines. Line width was found to have an effect on both clarity and brightness, a finding inconsistent with models that employ only completion perpendicular to inducer orientation.
    Air Force Office of Scientific Research (F49620-92-J-0334, URI 90-0175); National Science Foundation (Graduate Fellowship); Office of Naval Research (N00014-91-J-4100)

    Perceived Texture Segregation in Chromatic Element-Arrangement Patterns: High Intensity Interference

    An element-arrangement pattern is composed of two types of elements that differ in the ways in which they are arranged in different regions of the pattern. We report experiments on the perceived segregation of chromatic element-arrangement patterns composed of equal-size red and blue squares as the luminances of the surround, the interspaces, and the background (surround plus interspaces) are varied. Perceived segregation was markedly reduced by increasing the luminance of the interspaces. Unlike with achromatic element-arrangement patterns composed of squares differing in lightness (Beck, Graham, & Sutter, 1991), perceived segregation did not decrease when the luminance of the interspaces was below that of the squares. Perceived segregation was approximately constant for constant ratios of interspace luminance to square luminance and increased with the contrast ratio of the squares. Perceived segregation based on edge alignment was not interfered with by high-intensity interspaces. Stereoscopic cues that caused the squares composing the element-arrangement pattern to be seen in front of the interspaces did not greatly improve perceived segregation. One explanation of the results is in terms of inhibitory interactions among achromatic and chromatic cortical cells tuned to spatial frequency and orientation. Alternatively, the results may be explained in terms of how the luminance of the interspaces affects the grouping of the squares for encoding surface representations. Neither explanation accounts fully for the data, and both mechanisms may be involved.
    Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (A303-21-93); Office of Naval Research (N00014-91-J-4100); CNPq and NUTES/UFRJ, Brazil

    A Contrast- and Luminance-Driven Multiscale Network Model of Brightness Perception

    A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, low- and high-contrast missing fundamental, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce brightness profiles through the interaction of boundary and feature signals. Boundary computations that are sensitive to luminance steps and to continuous luminance gradients are presented. A new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information is provided and directly addresses the issue of brightness "anchoring." Computer simulations illustrate the model's competencies.
    Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE-A303-21-93); Office of Naval Research (N00014-91-J-4100); German BMFT grant (413-5839-01 1N 101 C/1); CNPq and NUTES/UFRJ, Brazil
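
As a hint of how boundary and feature signals can interact to produce brightness profiles, here is a generic one-dimensional filling-in sketch: contrast-driven feature signals, appreciable only near a luminance step, diffuse laterally except across the boundary, leaving one plateau per region. Kernel width, diffusion rate, and iteration count are arbitrary choices, and the sketch omits the paper's multiscale and luminance-driven machinery.

```python
import numpy as np

n = 60
lum = np.where(np.arange(n) < n // 2, 0.2, 0.8)          # step luminance profile

# Contrast-driven feature signals: crude center-surround responses, nonzero
# mainly near the step.
kernel = np.ones(9) / 9.0
local_mean = np.convolve(np.pad(lum, 4, mode="edge"), kernel, mode="valid")
feature = lum - local_mean

boundary = np.zeros(n - 1)
boundary[n // 2 - 1] = 1.0                               # boundary at the step
perm = 1.0 - boundary                                    # boundaries block diffusion

for _ in range(5000):                                    # relax toward steady state
    flow = perm * np.diff(feature)
    feature[:-1] += 0.2 * flow
    feature[1:] -= 0.2 * flow

print(feature.round(3))   # a darker plateau left of the step, a brighter one right
```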

    Does Optic Flow Explain the Firing of Grid Cells?

    *Problem.* Various cues, such as vestibular, sensorimotor, or visual information, can lead to the firing of grid cells recorded in the entorhinal cortex of rats. A recent model uses boundary vector cells to provide information about 2D spatial position (Barry et al., Reviews in the Neurosciences, 17, 2006). However, boundary vector cells need to know the angle and distance of the boundary wall. In contrast, we study the estimation of the rat's 2D velocity and change of heading from optic flow, and ask whether this information can lead to grid cell firing.
*Approach.* A simple circular cage is modeled as a 3D world, and trajectories of a rat's movement are simulated. Optic flow for a spherical camera model is calculated at regularly sampled locations on the ground of the cage. This flow information is used in a template model to estimate the rat's 2D linear velocity and yaw rotational velocity. The 2D linear velocities are integrated into the velocity-controlled oscillator (VCO) model (Burgess, Hippocampus, 18, 2008), while spatial locations are taken from the original trajectory.
*Result and Conclusion.* If velocity estimates are temporally integrated over ~20 min, the error accumulated by path integration prevents generation of a clear grid cell firing pattern by the VCO model. For short durations, however, velocity estimates and path integration are accurate. If we assume a reset mechanism that recalibrates the rat's spatial location, grid cell firing can be achieved. Different reset intervals were simulated, and the grid score for the resulting firing pattern was calculated. For reset intervals longer than one minute, this grid score decreases rapidly. We conclude that grid cell firing is not generated by optic flow alone, but that a recalibration of the spatial position using cues other than optic flow occurs at least every minute.
Supported by CELEST (NSF SMA-0835976)
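
A compact simulation conveys the scale of the problem. The sketch below implements a generic velocity-controlled oscillator in the spirit of the cited VCO model: each oscillator's phase path-integrates a noisy velocity estimate along a preferred direction, and a reset periodically recalibrates phases from the true position. Noise level, grid scale, foraging statistics, and the reset rule are illustrative assumptions rather than the values used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.radians([0.0, 60.0, 120.0])
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # preferred directions
beta = 2 * np.pi / 0.30    # phase gain: one grid cycle per 0.30 m
dt = 0.02                  # 20 ms time step

def mean_phase_drift(minutes, reset_interval_s=None):
    """Average oscillator phase error after simulated foraging."""
    pos, vel, phase = np.zeros(2), np.zeros(2), np.zeros(3)
    for t in range(int(round(minutes * 60 / dt))):
        vel = 0.95 * vel + rng.normal(0.0, 0.02, 2)        # smooth random path
        pos += vel * dt
        est_vel = vel + rng.normal(0.0, 0.02, 2)           # optic-flow error
        phase += beta * (dirs @ est_vel) * dt              # path integration
        if reset_interval_s and t % int(round(reset_interval_s / dt)) == 0:
            phase = beta * (dirs @ pos)                    # recalibrate from cues
    return float(np.abs(phase - beta * (dirs @ pos)).mean())

print("20 min, no reset   :", round(mean_phase_drift(20), 2), "rad")
print("20 min, 60 s resets:", round(mean_phase_drift(20, 60), 2), "rad")
```

Without resets the phase error grows to a substantial fraction of a grid cycle, washing out the interference pattern, whereas minute-scale resets keep it small, consistent with the conclusion above.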

    Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction

    A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally unselective cell responses to activate directionally tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend to a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception.
    Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
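
To illustrate the capture dynamics in miniature, the toy relaxation below treats a line as a row of positions with uniform (ambiguous) direction evidence everywhere except at its two endpoints, the feature tracking points. A long-range spatial filter plus a sharpening step standing in for competition propagates the endpoint direction inward. This is an illustrative reduction, not the model's actual circuitry.

```python
import numpy as np

n_pos, n_dirs = 21, 8
evidence = np.full((n_pos, n_dirs), 1.0 / n_dirs)   # ambiguous interior points
evidence[0] = evidence[-1] = 0.0
evidence[0, 2] = evidence[-1, 2] = 1.0              # unambiguous endpoints

activity = evidence.copy()
kernel = np.array([0.25, 0.5, 0.25])                # spatially long-range filter

for _ in range(50):
    # Long-range grouping: pool each direction's activity across positions.
    pooled = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, activity)
    pooled = pooled ** 3                            # competition sharpens winners
    pooled /= pooled.sum(axis=1, keepdims=True)
    activity = 0.5 * evidence + 0.5 * pooled        # feedback meets bottom-up input

print(activity.argmax(axis=1))   # all positions now signal direction 2
```

The unambiguous endpoint signal spreads inward by roughly one position per iteration until every ambiguous point is captured, the same qualitative behavior the abstract attributes to long-range filters with top-down grouping feedback.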