Longer fixation duration while viewing face images
The spatio-temporal properties of saccadic eye movements can be influenced by cognitive demand and by the characteristics of the observed scene. Probably because of its crucial role in social communication, face perception is argued to involve cognitive processes different from those underlying non-face object or scene perception. In this study, we investigated whether and how face and natural scene images influence patterns of visuomotor activity. We recorded monkeys’ saccadic eye movements as they freely viewed monkey face and natural scene images. The face and natural scene images attracted a similar number of fixations, but viewing faces was accompanied by longer fixations than viewing natural scenes. These longer fixations depended on the context of facial features: the duration of fixations directed at facial contours decreased when the face images were scrambled, and increased at the later stages of normal face viewing. The results suggest that face and natural scene images generate different patterns of visuomotor activity, and that the extra fixation duration on faces may be correlated with detailed analysis of facial features.
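As a concrete illustration of how fixation durations are typically extracted from raw gaze data, here is a minimal sketch of a dispersion-threshold (I-DT) fixation detector in Python. The dispersion and duration thresholds, and the assumption that gaze is given in degrees of visual angle, are illustrative choices, not the parameters used in this study.

```python
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.100):
    """Dispersion-threshold (I-DT) fixation detection.

    x, y : numpy arrays of gaze position in degrees of visual angle
    t    : numpy array of sample timestamps in seconds
    Returns a list of (onset, duration) tuples.
    Thresholds are illustrative, not the study's actual parameters.
    """
    fixations, start, n = [], 0, len(t)
    while start < n:
        end = start
        # Grow the window while the gaze points stay within the dispersion limit.
        while end + 1 < n:
            win_x, win_y = x[start:end + 2], y[start:end + 2]
            dispersion = (win_x.max() - win_x.min()) + (win_y.max() - win_y.min())
            if dispersion > max_dispersion:
                break
            end += 1
        duration = t[end] - t[start]
        if duration >= min_duration:
            fixations.append((t[start], duration))
            start = end + 1        # skip past the detected fixation
        else:
            start += 1             # slide the window forward by one sample
    return fixations
```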
Influence of initial fixation position in scene viewing
During scene perception our eyes generate complex sequences of fixations. Predictors of fixation locations are bottom-up factors like luminance contrast, top-down factors like viewing instruction, and systematic biases like the tendency to place fixations near the center of an image. However, comparatively little is known about the dynamics of scanpaths after experimental manipulation of specific fixation locations. Here we investigate the influence of initial fixation position on subsequent eye-movement behavior on an image. We presented 64 colored photographs to participants who started their scanpaths from one of two experimentally controlled positions in the right or left part of an image. Additionally, we computed the images' saliency maps and classified them as balanced images or images with high saliency values on either the left or right side of a picture. As a result of the starting point manipulation, we found long transients of mean fixation position and a tendency to overshoot to the image side opposite to the starting position. Possible mechanisms for the generation of this overshoot were investigated using numerical simulations of statistical and dynamical models. We conclude that inhibitory tagging is a viable mechanism for dynamical planning of scanpaths.
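To make the inhibitory-tagging account concrete, here is a minimal sketch of a winner-take-all scanpath simulation over a saliency map with inhibition of return. The suppression radius, strength, and number of fixations are illustrative assumptions; this is not one of the statistical or dynamical models actually tested in the paper.

```python
import numpy as np

def simulate_scanpath(saliency, n_fixations=10, inhibition_radius=40,
                      inhibition_strength=0.9):
    """Generate a scanpath by repeatedly selecting the most salient
    location and suppressing ('tagging') a region around it.
    All parameters are illustrative, not fitted to data."""
    sal = saliency.astype(float).copy()
    h, w = sal.shape
    yy, xx = np.mgrid[0:h, 0:w]
    path = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        path.append((x, y))
        # Inhibitory tag: Gaussian suppression centred on the fixated location,
        # so subsequent fixations are pushed toward unvisited regions.
        dist2 = (xx - x) ** 2 + (yy - y) ** 2
        sal *= 1.0 - inhibition_strength * np.exp(-dist2 / (2 * inhibition_radius ** 2))
    return path
```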
Do gaze cues in complex scenes capture and direct the attention of high-functioning adolescents with ASD? Evidence from eye-tracking
Visual fixation patterns whilst viewing complex photographic scenes containing one person were studied in 24 high-functioning adolescents with Autism Spectrum Disorders (ASD) and 24 matched typically developing adolescents. Over two different scene presentation durations, both groups spent a large, strikingly similar proportion of their viewing time fixating the person’s face. However, time-course analyses revealed group differences in the priority of attention given to the region of the face containing the eyes. It was also noted that although individuals with ASD were rapidly cued by the gaze direction of the person in the scene, this was not followed by the immediate increase in total fixation duration at the gazed-at location that was seen in typically developing individuals.
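A time-course analysis of this kind can be sketched as binning fixation time by area of interest (AOI) across successive time windows. The AOI labels and bin width below are illustrative assumptions, not the study's actual analysis parameters.

```python
import numpy as np

def aoi_timecourse(fix_onsets, fix_durations, fix_aois, bin_width=0.5, total=5.0):
    """Proportion of viewing time per AOI in successive time bins.

    fix_onsets, fix_durations : fixation timing in seconds
    fix_aois : AOI label per fixation, e.g. 'eyes', 'face', 'gazed_object'
    (labels and bin width are illustrative assumptions).
    """
    edges = np.arange(0.0, total + bin_width, bin_width)
    labels = sorted(set(fix_aois))
    out = {lab: np.zeros(len(edges) - 1) for lab in labels}
    for onset, dur, lab in zip(fix_onsets, fix_durations, fix_aois):
        for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
            # Time this fixation overlaps the current bin.
            overlap = max(0.0, min(onset + dur, hi) - max(onset, lo))
            out[lab][i] += overlap
    # Normalise each bin by its width to get a proportion of viewing time.
    for lab in labels:
        out[lab] /= bin_width
    return edges, out
```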
Summary of along-track data from the Earth Radiation Budget Satellite for several representative ocean regions
For several days in January and August 1985, the Earth Radiation Budget Satellite, a component of the Earth Radiation Budget Experiment (ERBE), was operated in an along-track scanning mode. A survey of radiance measurements taken in this mode is given for five ocean regions: the North and South Atlantic, the Arabian Sea, the western Pacific north of the Equator, and part of the Intertropical Convergence Zone. Each overflight contains information about the clear scene and three cloud categories: partly cloudy, mostly cloudy, and overcast. The data presented include the variation of longwave and shortwave radiance in each scene classification as a function of viewing zenith angle during each overflight of one of the five target regions. Several features of interest for the development of anisotropic models are evident, including the azimuthal dependence of shortwave radiance that is an essential feature of shortwave bidirectional models. The data also demonstrate that the scene classification algorithm employed by the ERBE produces scene classifications that are a function of viewing geometry.
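As a sketch of how such a survey can be summarized, the snippet below bins radiance measurements by viewing zenith angle within each scene classification. The bin edges, array layout, and class labels are illustrative assumptions, not the ERBE processing itself.

```python
import numpy as np

def radiance_vs_zenith(zenith, radiance, scene_class, bin_edges=None):
    """Mean radiance per viewing-zenith-angle bin for each scene class
    (e.g. 'clear', 'partly cloudy', 'mostly cloudy', 'overcast').
    Bin edges and class labels are illustrative assumptions."""
    zenith, radiance = np.asarray(zenith), np.asarray(radiance)
    scene_class = np.asarray(scene_class)
    if bin_edges is None:
        bin_edges = np.arange(0, 95, 10)   # degrees, 10-degree bins
    results = {}
    for cls in np.unique(scene_class):
        mask = scene_class == cls
        idx = np.digitize(zenith[mask], bin_edges) - 1
        # Mean radiance in each zenith-angle bin; NaN where a bin is empty.
        means = [radiance[mask][idx == i].mean() if np.any(idx == i) else np.nan
                 for i in range(len(bin_edges) - 1)]
        results[cls] = np.array(means)
    return bin_edges, results
```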
Computational algorithms for increased control of depth-viewing volume for stereo three-dimensional graphic displays
Three-dimensional pictorial displays incorporating depth cues by means of stereopsis offer a potential means of presenting information in a natural way to enhance situational awareness and improve operator performance. Conventional computational techniques rely on asymptotic projection transformations and symmetric clipping to produce the stereo display. Implementation of two new computational techniques, an asymmetric clipping algorithm and a piecewise linear projection transformation, gives the display designer more control and better utilization of the effective depth-viewing volume, allowing full exploitation of stereopsis cuing. Asymmetric clipping increases the perceived field of view (FOV) for the stereopsis region; the total horizontal FOV provided by the asymmetric clipping algorithm is greater throughout the scene-viewing envelope than that of the symmetric algorithm. The new piecewise linear projection transformation allows the designer to partition the depth-viewing volume creatively, with freedom to place depth cuing at the scene distances where emphasis is desired.
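The asymmetric clipping described here is closely related to the off-axis (asymmetric) viewing frustum commonly used for stereo rendering. Below is a minimal sketch, in the OpenGL glFrustum convention, of constructing per-eye asymmetric frusta; the eye-separation/convergence parameterization is an illustrative assumption, not the paper's exact algorithm.

```python
import numpy as np

def asymmetric_frustum(left, right, bottom, top, near, far):
    """Off-axis perspective projection matrix (OpenGL glFrustum convention).
    Unequal |left| and |right| yield the asymmetric clipping volume
    used for each eye in stereo rendering."""
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

def stereo_frustums(fov_y_deg, aspect, near, far, eye_sep, convergence):
    """Left/right-eye frusta whose clip planes are shifted so both
    projections coincide at the convergence distance (an illustrative
    parameterisation, not the paper's algorithm)."""
    top = near * np.tan(np.radians(fov_y_deg) / 2)
    bottom, half_w = -top, top * aspect
    shift = (eye_sep / 2) * near / convergence  # per-eye horizontal plane shift
    left_eye = asymmetric_frustum(-half_w + shift, half_w + shift,
                                  bottom, top, near, far)
    right_eye = asymmetric_frustum(-half_w - shift, half_w - shift,
                                   bottom, top, near, far)
    return left_eye, right_eye
```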
The Modelling of Stereoscopic 3D Scene Acquisition
The main goal of this work is to find a suitable method for calculating the best setting of a stereo pair of cameras viewing a scene, to enable spatial imaging. The method is based on a geometric model of the stereo camera pairs currently used for the acquisition of 3D scenes. Based on selectable camera parameters and object positions in the scene, the resulting model allows the parameters of the stereo image pair that influence the quality of spatial imaging to be calculated. To present the properties of the model on a simple 3D scene, an interactive application was created that, in addition to setting the camera and scene parameters and displaying the calculated parameters, also displays the modelled scene using perspective views and the modelled stereo pair rendered as anaglyphic images. The resulting modelling method can be used in practice to determine appropriate camera-configuration parameters from the known arrangement of the objects in the scene; analogously, for a given camera configuration, it can determine appropriate geometric limits on the arrangement of the objects in the displayed scene. This method ensures that the resulting stereoscopic recording will be of good quality and comfortable for the observer.
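A representative calculation from such a geometric model is the on-screen parallax of a scene point, which determines whether it appears in front of or behind the screen. The sketch below assumes a parallel camera rig converged by horizontal image shift; all parameter names and example values are illustrative assumptions.

```python
def screen_parallax(object_dist, convergence_dist, baseline,
                    focal_len, sensor_width, screen_width):
    """Horizontal on-screen parallax (metres) of a point at object_dist
    for a parallel stereo rig converged, by image shift, at
    convergence_dist. All names and parameters are illustrative."""
    # Image-plane parallax relative to the convergence distance:
    # zero at the convergence distance, negative (crossed) for nearer points.
    disparity = baseline * focal_len * (1 / convergence_dist - 1 / object_dist)
    # Scale from sensor coordinates to the presentation screen.
    return disparity * screen_width / sensor_width

# Example: 65 mm baseline, 35 mm lens, converged at 3 m, object at 2 m,
# 36 mm wide sensor shown on a 2 m wide screen.
p = screen_parallax(2.0, 3.0, 0.065, 0.035, 0.036, 2.0)
print(f"parallax: {p * 1000:.1f} mm")  # negative (crossed): in front of the screen
```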
