Spacing affects some but not all visual searches: Implications for theories of attention and crowding
We investigated the effect of varying interstimulus spacing on an upright among inverted face search and a red–green among green–red bisected disk search. Both tasks are classic examples of serial search; however, spacing affects them very differently: As spacing increased, face discrimination performance improved significantly, whereas performance on the bisected disks remained poor. (No effect of spacing was observed for either a red among green or an L among + search task, two classic examples of parallel search.) In a second experiment, we precued the target location so that attention was no longer a limiting factor: Both serial search tasks were now equally affected by spacing, a result we attribute to a more classical form of crowding. The observed spacing effect in visual search suggests that for certain tasks, serial search may result from local neuronal competition between target and distractors, soliciting attentional resources; in other cases, serial search must occur for another reason, for example, because an item-by-item, attention-mediated recognition must take place. We speculate that this distinction may be based on whether or not there exist neuronal populations tuned to the relevant target–distractor distinction, and we discuss the possible relations between this spacing effect in visual search and other forms of crowding.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
Contour Integration Across Polarities and Spatial Gaps: From Local Contrast Filtering to Global Grouping
This article introduces an experimental paradigm to selectively probe the multiple levels of visual processing that influence the formation of object contours, perceptual boundaries, and illusory contours. The experiments test the assumption that, to integrate contour information across space and contrast sign, a spatially short-range filtering process that is sensitive to contrast polarity feeds into a spatially long-range grouping process that pools signals from opposite contrast polarities. The stimuli consisted of thin subthreshold lines flashed upon the gaps between collinear inducers that potentially enable the formation of illusory contours. The subthreshold lines were composed of one or more segments with opposite contrast polarities. The polarity nearest to the inducers was varied to differentially excite the short-range filtering process. The experimental results are consistent with neurophysiological evidence for cortical mechanisms of contour processing and with the Boundary Contour System model, which identifies the short-range filtering process with cortical simple cells and the long-range grouping process with cortical bipole cells. Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657); Centre National de la Recherche Scientifique (France) (URA 1939)
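As a rough illustration of the two-stage architecture this abstract describes, the sketch below shows how a polarity-sensitive short-range filter can feed a polarity-pooling long-range stage that bridges contrast reversals along a contour. The box filters, rectification, and pooling rule are simplifying assumptions, not the Boundary Contour System equations.

```python
import numpy as np

def short_range_filter(signal, width=3):
    """Polarity-sensitive stage: smooth light-on-dark and dark-on-light
    responses separately over a short spatial range."""
    kernel = np.ones(width) / width
    on = np.convolve(np.maximum(signal, 0), kernel, mode="same")
    off = np.convolve(np.maximum(-signal, 0), kernel, mode="same")
    return on, off

def long_range_grouping(on, off, width=15):
    """Polarity-pooling stage: sum both polarities, then integrate over
    a longer spatial range (a crude stand-in for bipole pooling)."""
    kernel = np.ones(width) / width
    return np.convolve(on + off, kernel, mode="same")

# A contour whose segments alternate in contrast polarity.
x = np.zeros(100)
x[20:40], x[40:60], x[60:80] = 1.0, -1.0, 1.0

on, off = short_range_filter(x)
boundary = long_range_grouping(on, off)
print(boundary.argmax())  # grouping peaks inside the contour despite polarity reversals
```

Because the second stage pools across both rectified channels, its response is continuous across the polarity reversals that silence either short-range channel alone.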
Top-down effects on early visual processing in humans: a predictive coding framework
An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubts on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.
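A minimal sketch of the kind of top-down prediction-error loop such a framework envisions follows: a single linear generative layer in the style of Rao–Ballard predictive coding. The dimensions, weights, and learning rate are illustrative assumptions, not a model taken from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 16))  # generative weights: causes -> early visual activity
x = rng.normal(size=64)                   # sensory input (e.g., a V1 activity pattern)
r = np.zeros(16)                          # higher-level estimate of the causes

lr = 0.05
for _ in range(200):
    prediction = W @ r          # top-down prediction of early visual activity
    error = x - prediction      # prediction error carried by early areas
    r += lr * W.T @ error       # higher level updates to reduce the error

# The residual error is the quantity an early component like the C1
# would reflect under this account: it shrinks as predictions improve.
print(np.linalg.norm(x - W @ r))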
Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
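A toy caricature of the spatial-cueing idea described above: a learned spatial prior multiplies noisy bottom-up evidence accumulated across fixations until a decision threshold is reached. The priming rule, threshold, and all numbers are assumptions for illustration, not ARTSCENE Search's equations.

```python
import numpy as np

n_locations = 16
rng = np.random.default_rng(1)

# Learned spatial context: in this familiar "scene", the target tends to
# appear at location 5 (a stand-in for parietal priming by scene gist).
spatial_prior = np.full(n_locations, 0.5)
spatial_prior[5] = 2.0

evidence = np.zeros(n_locations)
target = 5
for fixation in range(20):
    # Noisy bottom-up samples, one per fixation (saccade).
    sample = rng.normal(size=n_locations)
    sample[target] += 1.0
    # Context multiplicatively primes target-like locations.
    evidence += spatial_prior * sample
    if evidence.max() > 15.0:             # decision threshold
        break

print(fixation + 1, evidence.argmax())    # primed search reaches the target in few fixations
```

Setting `spatial_prior` flat reproduces unprimed search, which takes noticeably more fixations to cross the same threshold.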
Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction
A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points, and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception. Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F4620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
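To make the transient-to-directional stage concrete, the sketch below substitutes a generic delay-and-compare correlation detector for the model's filter-and-competition circuitry, purely to illustrate how rectified, unoriented transient responses can yield direction selectivity. It is not the model's circuit.

```python
import numpy as np

# A bright bar stepping rightward across a 1-D array of positions.
T, N = 40, 60
frames = np.zeros((T, N))
for t in range(T):
    frames[t, 10 + t : 14 + t] = 1.0

# Transient cells: unoriented, respond to temporal change (half-wave rectified).
transient = np.maximum(np.diff(frames, axis=0), 0)

# Directionally tuned stage: pair each transient cell with a delayed
# neighbor; rightward pairs correlate, leftward pairs do not.
delayed = transient[:-1]
current = transient[1:]
rightward = np.sum(current[:, 1:] * delayed[:, :-1])
leftward = np.sum(current[:, :-1] * delayed[:, 1:])
print(rightward, leftward)  # rightward >> leftward for this stimulus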
Eye position modulates retinotopic responses in early visual areas: a bias for the straight-ahead direction
Even though the eyes constantly change position, the location of a stimulus can be accurately represented by a population of neurons with retinotopic receptive fields modulated by eye position gain fields. Recent electrophysiological studies, however, indicate that eye position gain fields may serve an additional function since they have a non-uniform spatial distribution that increases the neural response to stimuli in the straight-ahead direction. We used functional magnetic resonance imaging and a wide-field stimulus display to determine whether gaze modulations in early human visual cortex enhance the blood-oxygenation-level dependent (BOLD) response to stimuli that are straight-ahead. Subjects viewed rotating polar angle wedge stimuli centered straight-ahead or vertically displaced by ±20° eccentricity. Gaze position did not affect the topography of polar phase-angle maps, confirming that coding was retinotopic, but did affect the amplitude of the BOLD response, consistent with a gain field. In agreement with recent electrophysiological studies, BOLD responses in V1 and V2 to a wedge stimulus at a fixed retinal locus decreased when the wedge location in head-centered coordinates was farther from the straight-ahead direction. We conclude that stimulus-evoked BOLD signals are modulated by a systematic, non-uniform distribution of eye-position gain fields.
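A toy version of the multiplicative gain-field account suggested by these results: Gaussian retinotopic tuning scaled by a gain that declines with head-centered distance from straight ahead. All parameter values are illustrative assumptions, not fits to the BOLD data.

```python
import numpy as np

def bold_response(retinal_ecc, gaze_offset, sigma_rf=5.0, gain_slope=0.01):
    """Toy gain-field model: a retinotopic tuning curve multiplied by a
    gain that falls off with head-centered eccentricity. Parameter
    values are illustrative, not fitted."""
    retinotopic = np.exp(-retinal_ecc**2 / (2 * sigma_rf**2))
    head_centered_ecc = abs(retinal_ecc + gaze_offset)
    gain = max(0.0, 1.0 - gain_slope * head_centered_ecc)
    return retinotopic * gain

# Same retinal stimulus under three gaze positions (degrees).
for gaze in (0.0, 20.0, -20.0):
    print(gaze, round(bold_response(retinal_ecc=2.0, gaze_offset=gaze), 3))
```

The retinotopic factor, and hence the map topography, is identical across gaze positions; only the response amplitude changes, mirroring the dissociation reported in the abstract.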
Cortical Dynamics of Visual Motion Perception: Short-Range and Long Range Apparent Motion
This article describes further evidence for a new neural network theory of biological motion perception that is called a Motion Boundary Contour System. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The Motion Boundary Contour System consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a Motion Oriented Contrast Filter, or MOC Filter, for preprocessing moving images; and a Cooperative-Competitive Feedback Loop, or CC Loop, for generating emergent boundary segmentations of the filtered signals. The present article uses the MOC Filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies among flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the Motion Boundary Contour System design. Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (AFOSR-90-0083); Hughes Aircraft Company (S1-903136)
A dynamic neural field model of temporal order judgments
Temporal ordering of events is biased, or influenced, by perceptual organization (figure–ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation is maintained for these neural populations when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information.
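The onset/offset asymmetry the model posits can be caricatured with a single population whose activation relaxes toward baseline plus input: an enhanced "figure" baseline crosses the detection threshold earlier at target onset and falls below it later at offset. The dynamics and all parameter values below are illustrative assumptions, not the full dynamic field model.

```python
def simulate(baseline, dt=1.0, tau=20.0, steps=400, threshold=0.8):
    """Single-population caricature: activation u relaxes toward
    (baseline + input) with time constant tau. Values illustrative."""
    u = baseline
    onset_t = offset_t = None
    for t in range(steps):
        stim = 1.0 if 50 <= t < 250 else 0.0    # target present for 200 steps
        u += dt / tau * (-u + baseline + stim)
        if onset_t is None and u > threshold:
            onset_t = t                          # onset detected
        if t >= 250 and offset_t is None and u < threshold:
            offset_t = t                         # offset detected
    return onset_t, offset_t

# Figure (enhanced baseline) vs. ground populations.
print("figure:", simulate(baseline=0.4))   # earlier onset, later offset detection
print("ground:", simulate(baseline=0.1))
```

Running this gives the figure population an onset detection roughly a dozen steps earlier and an offset detection roughly a dozen steps later than the ground population, the qualitative pattern the abstract describes.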
Neural Dynamics of Motion Processing and Speed Discrimination
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1 → MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density, and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception. Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-94-1-0597); Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-90-00530)
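A toy rendering of the size-speed correlation mechanism described above: filters of several sizes, output thresholds that covary with size, and a normalizing competition. The functional forms (a saturating drive, linear thresholds, sum normalization) are invented for illustration and are not the model's equations.

```python
import numpy as np

def speed_tuned_responses(speed, filter_sizes=(1, 2, 4, 8)):
    """Toy size-speed correlation: each short-range filter integrates a
    transient input up to a limit set by its size, with an output
    threshold that covaries with size. Values illustrative."""
    responses = []
    for size in filter_sizes:
        drive = min(speed, size)      # larger filters keep integrating at higher speeds
        threshold = 0.5 * size        # output threshold covaries with filter size
        responses.append(max(0.0, drive - threshold))
    responses = np.array(responses)
    total = responses.sum()
    # Competition: normalize so the scales vie for activation.
    return responses / total if total > 0 else responses

for speed in (1.0, 3.0, 6.0):
    print(speed, np.round(speed_tuned_responses(speed), 2))
```

As the stimulus speed grows, the peak of the normalized response shifts from the smallest to the largest filters, yielding a distributed population code for speed of the kind the abstract describes.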