
    The influence of large scanning eye movements on stereoscopic slant estimation of large surfaces

    The results of several experiments demonstrate that the estimated magnitude of perceived slant of large stereoscopic surfaces increases with the duration of the presentation. In these experiments subjects were free to make eye movements. A possible explanation for the increase is that the visual system needs to scan the stimulus with eye movements (which take time) before it can make a reliable estimate of slant. We investigated the influence of large scanning eye movements on stereoscopic slant estimation of large surfaces. Six subjects estimated the magnitude of slant about the vertical or horizontal axis induced by large-field stereograms in which one half-image was transformed by horizontal scale, horizontal shear, vertical scale, vertical shear, divergence or rotation relative to the other half-image. The experiment was blocked in three sessions, each devoted to one of the following fixation strategies: central fixation, peripheral (20 deg) fixation, and active scanning of the stimulus. The presentation duration in each session was 0.5, 2 or 8 sec. Estimates were made with and without a visual reference. The magnitudes of estimated slant and the perceptual biases were not significantly influenced by the three fixation strategies. Thus, our results provide no support for the hypothesis that the time used for the execution of large scanning eye movements explains the build-up of estimated slant with the duration of the stimulus presentation.
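The six half-image transforms named in the abstract (horizontal/vertical scale and shear, divergence, rotation) can all be expressed as small linear distortions of one half-image's coordinates. The sketch below illustrates this, assuming small-magnitude 2x2 matrices in a conventional parameterization; the exact matrix conventions of the original stimuli are an assumption, not taken from the paper.

```python
import numpy as np

def half_image_transform(points, kind, m):
    """Apply one of six half-image transforms used to induce stereoscopic
    slant. `points` is an (N, 2) array of (x, y) screen coordinates and
    `m` is a small magnitude (e.g. 0.05 for a 5% scale, or radians for
    rotation). The matrix conventions here are illustrative assumptions."""
    T = {
        "horizontal_scale": np.array([[1 + m, 0.0], [0.0, 1.0]]),
        "horizontal_shear": np.array([[1.0, m], [0.0, 1.0]]),
        "vertical_scale":   np.array([[1.0, 0.0], [0.0, 1 + m]]),
        "vertical_shear":   np.array([[1.0, 0.0], [m, 1.0]]),
        # Divergence: equal magnification of both axes.
        "divergence":       np.array([[1 + m, 0.0], [0.0, 1 + m]]),
        # Rotation: for small m, equivalent to equal and opposite shears.
        "rotation":         np.array([[np.cos(m), -np.sin(m)],
                                      [np.sin(m),  np.cos(m)]]),
    }[kind]
    return points @ T.T
```

For example, a horizontal scale leaves y untouched while stretching x, whereas a vertical shear displaces y in proportion to x.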

    ARTSCENE: A Neural System for Natural Scene Classification

    How do humans rapidly recognize a scene? How can neural models capture this biological competence to achieve state-of-the-art scene classification? The ARTSCENE neural system classifies natural scene photographs by using multiple spatial scales to efficiently accumulate evidence for gist and texture. ARTSCENE embodies a coarse-to-fine Texture Size Ranking Principle whereby spatial attention processes multiple scales of scenic information, ranging from global gist to local properties of textures. The model can incrementally learn and predict scene identity by gist information alone and can improve performance through selective attention to scenic textures of progressively smaller size. ARTSCENE discriminates four landscape scene categories (coast, forest, mountain and countryside) with up to 91.58% correct on a test set, outperforms alternative models in the literature which use biologically implausible computations, and outperforms component systems that use either gist or texture information alone. Model simulations also show that adjacent textures form higher-order features that are also informative for scene recognition.

    National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
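The coarse-to-fine idea (global gist first, then progressively smaller texture regions) can be illustrated with a toy feature stack. This is only a minimal sketch of the size-ranking principle, assuming simple block-mean statistics over grids of increasing resolution; the actual ARTSCENE model uses orientation-based gist and texture features combined with ART category learning, none of which is implemented here.

```python
import numpy as np

def coarse_to_fine_features(img, scales=(1, 2, 4, 8)):
    """Toy coarse-to-fine feature stack: mean intensity over an s-by-s
    grid for each scale s, from global gist (1x1) down to finer local
    texture statistics. `img` is a 2D grayscale array whose sides are
    divisible by the largest scale. Illustrative sketch only."""
    h, w = img.shape
    feats = []
    for s in scales:
        bh, bw = h // s, w // s
        for i in range(s):
            for j in range(s):
                block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                feats.append(block.mean())
    return np.array(feats)
```

A classifier could consult the coarsest entries first (gist) and fall back on the finer entries only when the coarse evidence is ambiguous, mirroring the accumulation strategy the abstract describes.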

    View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds

    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Simple Configuration Effects on Eye Movements in Horizontal Scanning Tasks

    When reading text, observers alternate between periods of stable gaze (fixations) and shifts of gaze (saccades). An important debate in the literature concerns the processes that drive the control of these eye movements. Past studies using strings of letters rather than meaningful text ('z-reading') suggest that eye movement control during reading is, to a large extent, driven by low-level image properties. These studies, however, have failed to take into account perceptual grouping processes that could underlie these low-level effects. Here we study the role of various grouping factors in horizontal scanning eye movements, and compare these to reading meaningful text. The results show that sequential horizontal scanning of meaningless and visually distinctive stimuli is slower than scanning of meaningful stimuli (e.g. letters rather than dots). Moreover, we found strong evidence for anticipatory processes in saccadic processing during horizontal scanning tasks. These results suggest a strong role of perceptual grouping in oculomotor control during reading.

    Perceptual learning without feedback and the stability of stereoscopic slant estimation

    Subjects were examined for practice effects in a stereoscopic slant estimation task involving surfaces that comprised a large portion of the visual field. In most subjects slant estimation was significantly affected by practice, but only when an isolated surface (an absolute disparity gradient) was present in the visual field. When a second, unslanted, surface was visible (providing a second disparity gradient and thereby also a relative disparity gradient), none of the subjects exhibited practice effects. Apparently, stereoscopic slant estimation is more robust or stable over time in the presence of a second surface than in its absence. In order to relate the practice effects, which occurred without feedback, to perceptual learning, the results are interpreted within a cue interaction framework. Within this framework the contribution of a cue depends on its reliability. It is suggested that normally absolute disparity gradients contribute relatively little to perceived slant and that subjects learn to increase this contribution by utilizing proprioceptive information. It is argued that (given the limited computational power of the brain) a relatively small contribution of absolute disparity gradients in perceived slant enhances the stability of stereoscopic slant perception.
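The cue interaction framework invoked above is commonly formalized as reliability-weighted (inverse-variance) cue combination, in which each cue's weight is proportional to 1/variance. The sketch below shows that standard formulation; it illustrates the framework in general and is not the paper's specific learning mechanism.

```python
def combine_cues(estimates, variances):
    """Standard maximum-likelihood cue combination: each cue's weight is
    proportional to its reliability (1 / variance), and the combined
    estimate has lower variance than any single cue. Illustrative of the
    cue interaction framework, not of the paper's learning model."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total  # always <= min(variances)
    return combined, combined_variance, weights
```

On this view, "learning to increase the contribution" of absolute disparity gradients corresponds to the visual system treating that cue as more reliable, i.e. assigning it a smaller effective variance and hence a larger weight.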

    General-purpose and special-purpose visual systems

    The information that eyes supply supports a wide variety of functions, from the guidance systems that enable an animal to navigate successfully around the environment, to the detection and identification of predators, prey, and conspecifics. The eyes with which we are most familiar (the single-chambered eyes of vertebrates and cephalopod molluscs, and the compound eyes of insects and higher crustaceans) allow these animals to perform the full range of visual tasks. These eyes have evidently evolved in conjunction with brains that are capable of subjecting the raw visual information to many different kinds of analysis, depending on the nature of the task that the animal is engaged in. However, not all eyes evolved to provide such comprehensive information. For example, in bivalve molluscs we find eyes of very varied design (pinholes, concave mirrors, and apposition compound eyes) whose only function is to detect approaching predators and thereby allow the animal to protect itself by closing its shell. Thus, there are special-purpose eyes as well as eyes with multiple functions.

    Information processing in the retina: computer model and some conclusions


    Factors and processes in children's transitive deductions

    Transitive tasks are important for understanding how children develop socio-cognitively. However, developmental research has been restricted largely to questions surrounding maturation. We asked 6-, 7- and 8-year-olds (N = 117) to solve a composite of five different transitive tasks. Tasks included conditions asking about item-C (associated with the marked relation) in addition to the usual case of asking only about item-A (associated with the unmarked relation). Here, children found resolving item-C much easier than resolving item-A, a finding running counter to long-standing assumptions about transitive reasoning. Considering gender, perhaps for the first time, boys exhibited higher transitive scores than girls overall. Finally, analysing in the context of one recent and well-specified theory of spatial transitive reasoning, we generated the prediction that reporting the full series should be easier than deducing any one item from that series. This prediction was not upheld. We discuss the amendments necessary to accommodate all of our findings.

    Frontal Eye Field Neurons Assess Visual Stability Across Saccades

    The image on the retina may move because the eyes move, or because something in the visual scene moves. The brain is not fooled by this ambiguity. Even as we make saccades, we are able to detect whether visual objects remain stable or move. Here we test whether this ability to assess visual stability across saccades is present at the single-neuron level in the frontal eye field (FEF), an area that receives both visual input and information about imminent saccades. Our hypothesis was that neurons in the FEF report whether a visual stimulus remains stable or moves as a saccade is made. Monkeys made saccades in the presence of a visual stimulus outside of the receptive field. In some trials the stimulus remained stable, but in other trials it moved during the saccade. In every trial, the stimulus occupied the center of the receptive field after the saccade, thus evoking a reafferent visual response. We found that many FEF neurons signaled, in the strength and timing of their reafferent response, whether the stimulus had remained stable or moved. Reafferent responses were tuned for the amount of stimulus translation, and, in accordance with human psychophysics, tuning was better (more prevalent, stronger, and quicker) for stimuli that moved perpendicular, rather than parallel, to the saccade. Tuning was sometimes present as well for nonspatial transaccadic changes (in color, size, or both). Our results indicate that FEF neurons evaluate visual stability during saccades and may be general-purpose detectors of transaccadic visual change.

    Linking cortical visual processing to viewing behavior using fMRI

    One characteristic of natural visual behavior in humans is the frequent shifting of eye position. It has been argued that the characteristics of these eye movements can be used to distinguish between distinct modes of visual processing (Unema et al., 2005). These viewing modes would be distinguishable on the basis of the eye-movement parameters fixation duration and saccade amplitude, and have been hypothesized to reflect the differential involvement of dorsal and ventral systems in saccade planning and information processing. According to this hypothesis, in a “pre-attentive” or ambient mode, primarily scanning eye movements are made; in this mode fixations are relatively brief and saccades tend to be relatively large. In the “attentive” or focal mode, by contrast, fixations last longer and saccades are relatively small, resulting in viewing behavior that could be described as detailed inspection. Thus far, no neuroscientific basis exists to support the idea that such distinct viewing modes are indeed linked to processing in distinct cortical regions. Here, we used fixation-based event-related (FIBER) fMRI in combination with independent component analysis (ICA) to investigate the neural correlates of these viewing modes. While we find robust eye-movement-related activations, our results do not support the theory that the above-mentioned viewing modes modulate dorsal and ventral processing. Instead, further analyses revealed that eye-movement characteristics such as saccade amplitude and fixation duration did differentially modulate activity in three clusters in early, ventromedial and ventrolateral visual cortex. In summary, we conclude that evaluating viewing behavior is crucial for unraveling cortical processing in natural vision.
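The ambient/focal distinction described above rests on two measurable parameters per fixation: fixation duration and the amplitude of the following saccade. A minimal rule-based labeling of that distinction can be sketched as follows; the threshold values are illustrative assumptions, not values taken from this study or from Unema et al. (2005).

```python
def classify_viewing_mode(fixation_ms, saccade_deg,
                          dur_thresh=180.0, amp_thresh=5.0):
    """Toy labeler for the ambient vs. focal viewing-mode distinction:
    ambient mode = brief fixations followed by large saccades;
    focal mode = long fixations followed by small saccades.
    Thresholds (ms, degrees) are illustrative assumptions only."""
    if fixation_ms < dur_thresh and saccade_deg > amp_thresh:
        return "ambient"
    if fixation_ms >= dur_thresh and saccade_deg <= amp_thresh:
        return "focal"
    return "mixed"  # parameters do not fit either canonical pattern
```

Labeling each fixation this way is one route to building the fixation-based event regressors that a FIBER-style fMRI analysis would correlate with cortical activity.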