
    A Comparative Study of Fixation Density Maps

    Fixation density maps (FDM) created from eye tracking experiments are widely used in image processing applications. The FDM are assumed to be reliable ground truths of human visual attention, and as such one expects high similarity between FDM created in different laboratories. So far, no studies have analysed the degree of similarity between FDM from independent laboratories and the related impact on applications. In this paper, we perform a thorough comparison of FDM from three independently conducted eye tracking experiments. We focus on the effect of presentation time and image content, and evaluate the impact of the FDM differences on three applications: visual saliency modelling, image quality assessment, and image retargeting. It is shown that the FDM are very similar and that their impact on the applications is low. The individual experiment comparisons, however, are found to be significantly different, showing that inter-laboratory differences strongly depend on the experimental conditions of the laboratories. The FDM are publicly available to the research community.
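As a concrete illustration of how similarity between two FDM can be quantified, the Pearson correlation coefficient (CC) is one of the standard metrics used for comparing saliency and fixation maps; the abstract does not name the exact metrics used, and the function name below is illustrative:

```python
import numpy as np

def fdm_similarity(fdm_a: np.ndarray, fdm_b: np.ndarray) -> float:
    """Pearson correlation coefficient between two fixation density maps.

    Both maps are standardised to zero mean and unit variance; the CC is
    then the mean of their point-wise product. 1.0 means identical (up to
    affine scaling), 0.0 means uncorrelated, -1.0 means anti-correlated.
    """
    a = (fdm_a - fdm_a.mean()) / fdm_a.std()
    b = (fdm_b - fdm_b.mean()) / fdm_b.std()
    return float(np.mean(a * b))
```

Two FDM recorded in different laboratories would count as "very similar" under this metric when the CC is close to 1.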

    GazeDPM: Early Integration of Gaze Information in Deformable Part Models

    An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection these efforts have so far been restricted to late integration approaches that have inherent limitations, such as increased precision without an increase in recall. We propose an early integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and over a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze attracting and repelling areas, the importance of view-specific models, as well as viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, as well as robustness to gaze estimation error.

    Visual attention in the real world

    Humans typically direct their gaze and attention at locations important for the tasks they are engaged in. By measuring the direction of gaze, the relative importance of each location can be estimated, which can reveal how cognitive processes choose where gaze is to be directed. For decades, this has been done in laboratory setups, which have the advantage of being well controlled. Here, visual attention is studied in more life-like situations, which allows testing the ecological validity of laboratory results and allows the use of real-life setups that are hard to mimic in a laboratory. All four studies in this thesis contribute to our understanding of visual attention and perception in more complex situations than are found in traditional laboratory experiments. Bottom-up models of attention use the visual input to predict attention or even the direction of gaze. In such models the input image is first analyzed for each of several features. In the classic Saliency Map model, these features are color contrast, luminance contrast and orientation contrast. The “interestingness” of each location in the image is represented in a “conspicuity map”, one for each feature. The Saliency Map model then combines these conspicuity maps by linear addition, and this additivity has recently been challenged. The alternative is to use the maxima across all conspicuity maps. In the first study, the features color contrast and luminance contrast were manipulated in photographs of natural scenes to test which of these mechanisms is the best predictor of human behavior. It was shown that a linear addition, as in the original model, matches human behavior best. As all the assumptions of the Saliency Map model on the processes preceding the linear addition of the conspicuity maps are based on physiological research, this result constrains future models in their mechanistic assumptions.
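The two combination rules contrasted in the first study can be sketched as follows; the function names and toy maps are illustrative, not taken from the thesis:

```python
import numpy as np

def combine_linear(conspicuity_maps: np.ndarray) -> np.ndarray:
    # Classic Saliency Map model: point-wise linear addition of the
    # (already normalised) per-feature conspicuity maps.
    return np.sum(conspicuity_maps, axis=0)

def combine_max(conspicuity_maps: np.ndarray) -> np.ndarray:
    # Challenged alternative: point-wise maximum across the maps,
    # so only the single strongest feature determines saliency.
    return np.max(conspicuity_maps, axis=0)

# Toy example: two 1x2 conspicuity maps (e.g. color and luminance contrast).
maps = np.array([[[1.0, 1.0]],
                 [[2.0, 0.0]]])
linear = combine_linear(maps)  # both features contribute at each location
winner = combine_max(maps)     # only the strongest feature counts
```

The study found that the linear addition, as in the original model, predicted human fixations best.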
If models of visual attention are to have ecological validity, comparing visual attention in laboratory and real-world conditions is necessary, and this is done in the second study. In the first condition, eye movements and head-centered, first-person perspective movies were recorded while participants explored 15 real-world environments (“free exploration”). Clips from these movies were shown to participants in two laboratory tasks. First, the movies were replayed as they were recorded (“video replay”), and second, a shuffled selection of frames was shown for 1 second each (“1s frame replay”). Eye-movement recordings from all three conditions revealed that, in comparison to 1s frame replay, the video replay condition was qualitatively more similar to the free exploration condition with respect to the distribution of gaze and the relationship between gaze and model saliency, and was quantitatively better able to predict free exploration gaze. Furthermore, the onset of a new frame in 1s frame replay evoked a reorientation of gaze towards the center. That is, the event of presenting a stimulus in a laboratory setup affects attention in a way unlikely to occur in real life. In conclusion, video replay is a better model for real-world visual input. The hypothesis that walking on more irregular terrain requires visual attention to be directed at the path more was tested on a local street (“Hirschberg”) in the third study. Participants walked on both sides of this inclined street: a cobbled road and the immediately adjacent, irregular steps. The environment and instructions were kept constant. Gaze was directed at the path more when participants walked on the steps as compared to the road. This was accomplished by pointing both the head and the eyes lower on the steps than on the road, while only eye-in-head orientation was spread out more along the vertical on the steps, indicating more or larger eye movements on the more irregular steps.
These results confirm earlier findings that eye and head movements play distinct roles in directing gaze in real-world situations. Furthermore, they show that implicit tasks (not falling, in this case) affect visual attention as much as explicit tasks do. The last study asks whether actions affect perception. An ambiguous stimulus that is alternately perceived as rotating clockwise or counterclockwise (the ‘percept’) was used. When participants had to rotate a manipulandum continuously in a pre-defined direction – either clockwise or counterclockwise – and reported their concurrent percept with a keyboard, percepts were not affected by movements. If participants had to use the manipulandum to indicate their percept – by rotating either congruently or incongruently with the percept – the movements did affect perception. This shows that ambiguity in visual input is resolved by relying on motor signals, but only when they are relevant for the task at hand. Either by using natural stimuli, by comparing behavior in the laboratory with behavior in the real world, by performing an experiment on the street, or by testing how two diverse but everyday sources of information are integrated, the faculty of vision was studied in more life-like situations. The validity of some laboratory work has been examined and confirmed, and some first steps in doing experiments in real-world situations have been made. Both seem to be promising approaches for future research.

    Salience-based object prioritization during active viewing of naturalistic scenes in young and older adults

    Whether fixation selection in real-world scenes is guided by image salience or by objects has been a matter of scientific debate. To contrast the two views, we compared effects of location-based and object-based visual salience in young and older (65+ years) adults. Generalized linear mixed models were used to assess the unique contribution of salience to fixation selection in scenes. When analysing fixation guidance without recourse to objects, visual salience predicted whether image patches were fixated or not. This effect was reduced for the elderly, replicating an earlier finding. When using objects as the unit of analysis, we found that highly salient objects were more frequently selected for fixation than objects with low visual salience. Interestingly, this effect was larger for older adults. We also analysed where viewers fixate within objects, once they are selected. A preferred viewing location close to the centre of the object was found for both age groups. The results support the view that objects are important units of saccadic selection. Reconciling the salience view with the object view, we suggest that visual salience contributes to prioritization among objects. Moreover, the data point towards an increasing relevance of object-bound information with increasing age.

    Measuring gaze and pupil in the real world: object-based attention, 3D eye tracking and applications

    This dissertation contains studies on visual attention, as measured by gaze orientation, and the use of mobile eye-tracking and pupillometry in applications. It combines the development of methods for mobile eye-tracking (studies II and III) with experimental studies on gaze guidance and pupillary responses in patients (studies IV and VI) and healthy observers (studies I and V). Object-based attention / Study I What is the main factor of fixation guidance in natural scenes: low-level features or objects? We developed a fixation-predicting model which establishes a preferred viewing location (PVL) per object and combines these distributions over the entirety of objects in the scene. Object-based fixation predictions for natural scene viewing perform on par with the best early salience models, which are based on low-level features. However, when stimuli are manipulated so that low-level features and objects are dissociated, the prediction power of the saliency models diminishes. Thus, we dare to claim that highly developed saliency models implicitly capture object-hood and that fixation selection is influenced mainly by objects and much less by low-level features. Consequently, attention guidance in natural scenes is object-based. 3D tracking / Study II The second study focussed on improving calibration procedures for eye-in-head positions with a mobile eye-tracker. We used a mobile eye-tracker prototype, the EyeSeeCam, with a high video-oculography (VOG) sampling rate and the technical ability to follow the user’s gaze direction instantaneously with a rotatable camera. For better accuracy in eye-positioning, we explored a refinement of the eye-in-head calibration that yields a measure of fixation distance, which led to a mobile eye-tracker 3D calibration.
Additionally, by developing the analytical mechanics for parametrically reorienting the gaze-centred camera, the 3D calibration could be applied to reliably record gaze-centred videos. Such videos are suitable as stimuli for investigating gaze behaviour during object manipulation or object recognition from a real-world point-of-view (PoV) perspective. In fact, the 3D calibration produces a higher accuracy in positioning the gaze-centred camera over the whole 3D visual range. Study III, eye-tracking methods With a further development of the EyeSeeCam we were able to record gaze-in-world data by superposing eye-in-head and head-in-world coordinates. This novel approach uses a combination of a few absolute head positions, extracted manually from the PoV video, and relative head shifts, integrated over angular velocities and translational accelerations, both given by an inertial measurement unit (IMU) synchronized to the VOG data. Gaze-in-world data consist of room-referenced gaze directions and their origins within the environment. They make it easy to assign fixation targets using a 3D model of the measuring environment – a strong rationalisation of fixation analysis. Applications Study III Daylight is an important perceptual factor for visual comfort, but can also create discomfort glare during office work, so we developed methods to measure its behavioural influence. We are able to compare luminance distributions and fixations in a real-world setting by also recording indoor luminance variations time-resolved, using luminance maps of a scenery spanning 3π sr. Luminance evaluations in the workplace environment yield a well-controlled categorisation of different lighting conditions, a localisation of glare sources, and a measure of their brightness. We used common tasks like reading, typing on a computer, a phone call and thinking about a subject.
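The superposition of eye-in-head and head-in-world coordinates described above amounts to rotating the eye-in-head gaze direction by the head's orientation in the room and anchoring the ray at the head's position. A minimal sketch, with illustrative names, assuming the head pose (rotation and position) has already been obtained from the IMU integration:

```python
import numpy as np

def gaze_in_world(r_head_world: np.ndarray,
                  t_head_world: np.ndarray,
                  gaze_dir_eye_in_head: np.ndarray):
    """Superpose eye-in-head and head-in-world coordinates.

    r_head_world: 3x3 rotation matrix of the head in room coordinates
                  (from IMU integration anchored to a few manually
                  extracted absolute head positions).
    t_head_world: gaze origin (head position) in room coordinates.
    gaze_dir_eye_in_head: gaze direction from the VOG calibration,
                  expressed in head coordinates.
    Returns (origin, unit direction) of the gaze ray in room coordinates;
    intersecting this ray with a 3D room model yields the fixation target.
    """
    direction = r_head_world @ gaze_dir_eye_in_head
    return t_head_world, direction / np.linalg.norm(direction)
```

The actual pipeline must also handle IMU drift between the absolute anchor positions, which this sketch omits.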
The 3D model makes it possible to test for gaze distribution shifts in the presence of glare patches and for variations between lighting conditions. Here, a low-contrast lighting condition with no direct sun inside and a high-contrast lighting condition with direct sunlight inside were compared. When the participants were not engaged in any visually focused task and the presence of task support was minimal, the dominant view directions were inclined towards the view outside the window under the low-contrast lighting condition, but this tendency was less apparent and swayed more towards the inside of the room under the high-contrast lighting condition. This result suggests an avoidance of glare sources in gaze behaviour. In a second, more extensive series of experiments, the participants’ subjective assessments of the lighting conditions will be included. Thus, the influence of glare can be analysed in more detail, and it can be tested whether visual discomfort judgements correlate with differences in gaze behaviour. Study IV The advanced eye-tracker calibration found application in several subsequent projects; included in this dissertation is an investigation of patients suffering either from idiopathic Parkinson’s disease or from progressive supranuclear palsy (PSP) syndrome. PSP’s key symptom is a decreased ability to carry out vertical saccades, which is the main diagnostic feature for differentiating between the two forms of Parkinson’s syndrome. By measuring ocular movements during a rapid (< 20 s) procedure with a standardized fixation protocol, we could successfully differentiate pre-diagnosed patients between idiopathic Parkinson’s disease and PSP, and thus between PSP patients and healthy controls (HCs) too. In PSP patients, the EyeSeeCam detected prominent impairment of both saccade velocity and amplitude. Furthermore, we show the benefits of a mobile eye-tracking device for application in clinical practice.
Study V Decision-making is one of the basic cognitive processes of human behaviour and, like other cognitive processes, evokes a pupil dilation. Since this dilation marks the temporal occurrence of the decision, we wondered whether individuals can read decisions from another person’s pupil and thus become a mentalist. For this purpose, a modified version of the childhood game rock-paper-scissors was played with three prototypical opponents while their eyes were videotaped. These videos served as stimuli for further participants, who competed in rock-paper-scissors against them. Our results show that reading decisions from a competitor’s pupil can be achieved and that players can raise their winning probability significantly above chance. This ability does not require training, only the instruction that the time of maximum pupil dilation is indicative of the opponent’s choice. We therefore conclude that people could use the pupil to detect cognitive decisions in another individual, given explicit knowledge of the pupil’s utility. Study VI For patients with severe motor disabilities, a robust means of communication is a crucial factor for well-being. Locked-in syndrome (LiS) patients suffer from quadriplegia and lack the ability to articulate their voice, though their consciousness is fully intact. While classic and incomplete LiS allows at least voluntary vertical eye movements or blinks to be used for communication, total LiS patients are not able to perform such movements. What remain are involuntarily evoked muscle reactions, as is the case with the pupillary response. The pupil dilation reflects enhanced cognitive or emotional processing, which we successfully observed in LiS patients. Furthermore, we created a communication system based on yes-no questions combined with the task of solving arithmetic problems during matching answer intervals, which evokes a pupil dilation reliable enough to be usable on a trial-by-trial basis for decoding yes or no as answers.
Applied to HCs and patients with various severe motor disabilities, we provide the proof of principle that pupil responses allow communication for all tested HCs and 4/7 typical LiS patients. Résumé Together, the methods established within this thesis are promising advances in measuring visual attention allocation with 3D eye-tracking in the real world and in the use of pupillometry as an on-line measurement of cognitive processes. The two most outstanding findings are the possibility to communicate with complete LiS patients and the conclusive evidence that objects are the primary unit of fixation selection in natural scenes.

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
