    Light, Information and Perception inside Historical Buildings. A Case Study

    Many studies in the literature have demonstrated that techniques from cognitive neuroscience can be applied in museums. Applications of physics, optics, thermodynamics and neuroscience can provide important support for applied and practical research on lighting our Cultural Heritage. In this research we adopted a multidisciplinary, integrated approach to the study of the luminous climate inside a historical building, with Villa La Quiete in Florence as the case study. Quantitative measurements of the relationship between observer and artworks were performed using the eye-tracking technique. Information theory read on a thermodynamic basis, the ergonomics of multi-perceptive learning and optical physics were the fundamental tools for selecting appropriate light sources in terms of spectral emission and colour temperature. The eye-tracking technique, combined with the quantification of lighting parameters, allowed us to check how the colour of light changes the observer's perception and, from the information-theoretic point of view, the communication and interpretation of the signals produced under different lighting. At the same time, by assessing the perceptive data of the visual path, it was possible to measure the neg-entropy, i.e. the informative content of the interaction of light with works of art.
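
    As an illustration of the information-theoretic side of this approach, the sketch below computes the neg-entropy of a fixation distribution over regions of interest of an artwork. It is a minimal example of the general idea only; the region layout, counts and function name are assumptions for illustration, not the instrumentation used in the study.

        import numpy as np

        def negentropy_bits(fixation_counts):
            # Neg-entropy of the observed fixation distribution relative to
            # the uniform (maximum-entropy) distribution, in bits.  Higher
            # values indicate a more structured, information-rich scanpath.
            p = np.asarray(fixation_counts, dtype=float)
            p = p / p.sum()
            p = p[p > 0]                           # 0 * log(0) is taken as 0
            h = -np.sum(p * np.log2(p))            # Shannon entropy of gaze
            h_max = np.log2(len(fixation_counts))  # entropy of uniform viewing
            return h_max - h

        # Hypothetical fixation counts over five regions of a painting
        print(negentropy_bits([120, 45, 30, 3, 2]))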

    Accurate pupil center detection in off-the-shelf eye tracking systems using convolutional neural networks

    Remote eye-tracking technology has seen rapid growth in recent years due to its applicability in many research areas. In this paper, a video-oculography method based on convolutional neural networks (CNNs) for pupil center detection in webcam images is proposed. As the first contribution of this work, and in order to train the model, a manual pupil-center labeling procedure was performed on a facial-landmark dataset. The model has been tested on both real and synthetic databases and outperforms state-of-the-art methods, achieving pupil center estimation errors below the size of a constricted pupil in more than 95% of the images while reducing computing time by a factor of 8. The results show the importance of using high-quality training data and well-known architectures to achieve outstanding performance. This research was funded by the Public University of Navarra (pre-doctoral research grant) and by the Spanish Ministry of Science and Innovation under contract 'Challenges of Eye Tracking Off-the-Shelf (ChETOS)', reference PID2020-118014RB-I0
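
    The abstract does not spell out the network, but a minimal PyTorch sketch of a CNN that regresses a normalized pupil-center coordinate from an eye crop might look as follows. The architecture, input size and layer widths are assumptions for illustration, not the authors' model.

        import torch
        import torch.nn as nn

        class PupilCenterNet(nn.Module):
            # Illustrative regressor: maps a 1x64x64 eye crop to (x, y) in [0, 1].
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                    nn.Linear(128, 2), nn.Sigmoid(),  # normalized (x, y)
                )

            def forward(self, x):
                return self.head(self.features(x))

        model = PupilCenterNet()
        eye = torch.rand(1, 1, 64, 64)  # dummy webcam eye crop
        xy = model(eye)                 # predicted pupil center, in [0, 1]^2

    Trained with an L2 loss against the manually labeled centers, a regressor of this shape can be evaluated exactly as the abstract describes, by comparing the pixel error against the radius of a constricted pupil.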

    Precise Non-Intrusive Real-Time Gaze Tracking System for Embedded Setups

    This paper describes a non-intrusive real-time gaze detection system characterized by precise determination of the subject's pupil centre. A narrow field-of-view (NFV) camera, focused on one of the subject's eyes, follows the head movements in order to keep the pupil centred in the image. When a tracking error is observed, feedback provided by a second, wide field-of-view (WFV) camera allows quick recovery of the tracking process. Illumination is provided by four infrared LED blocks synchronised with the electronic shutter of the eye camera. The characteristic shape of the corneal glints produced by these illuminators makes it possible to optimize the image-processing algorithms for gaze detection developed for this system. The illumination power used in the system is limited to well below the maximum recommended levels. After an initial calibration procedure, the line of gaze is determined from the vector defined by the pupil centre and a valid glint. The glints are validated using the iris outline to avoid glint distortion produced by changes in the curvature of the ocular globe. In order to minimize measurement error in the pupil-glint vector, algorithms are proposed to determine the pupil centre at sub-pixel resolution. Although the paper describes a desk-mounted prototype, the final implementation is to be installed on board a conventional car as an embedded system that determines the driver's line of gaze.
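
    One common way to obtain a sub-pixel pupil centre of the kind the abstract refers to is an intensity-weighted centroid of the dark pupil region. The sketch below illustrates that idea only; the threshold value and the weighting scheme are assumptions, not the paper's algorithm.

        import numpy as np

        def subpixel_pupil_center(eye_img, threshold=60.0):
            # Intensity-weighted centroid of dark (pupil) pixels.  The
            # continuous weighting yields sub-pixel resolution, which in
            # turn reduces noise in the pupil-glint vector.
            img = np.asarray(eye_img, dtype=float)
            w = np.where(img < threshold, threshold - img, 0.0)  # darker = heavier
            ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
            total = w.sum()
            return (xs * w).sum() / total, (ys * w).sum() / total

        # Hypothetical 8-bit eye crop; a real system would use the NFV frame
        frame = np.random.randint(0, 255, (120, 160)).astype(np.uint8)
        cx, cy = subpixel_pupil_center(frame)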

    Uncertainty visualization of gaze estimation to support operator-controlled calibration

    In this paper, we investigate how visualization assets can support the qualitative evaluation of gaze estimation uncertainty. Although eye-tracking data are widely available, little has been done to visually investigate the uncertainty of recorded gaze information. This paper aims to fill that gap with novel uncertainty computation and visualization. Given a gaze processing pipeline, we estimate the location of the gaze position in the world-camera image. To do so, we developed our own gaze data processing pipeline, which gives us access to every stage of the data transformation and thus to the uncertainty computation. To validate our gaze estimation pipeline, we designed an experiment with 12 participants and showed that the proposed correction methods reduced the mean error by about 1.32 cm, aggregating all 12 participants' results; the Mean Angular Error after correction of the estimated gaze is 0.25° (SD = 0.15°). Finally, to support the qualitative assessment of these data, we provide a map that encodes the actual uncertainty from the user's point of view.
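
    A simple way to render such an uncertainty map is to spread a 2D Gaussian, whose width reflects the propagated error, around the estimated gaze point in the world-camera frame. The sketch below is illustrative only; the frame resolution, gaze point and sigma values are assumed, not taken from the paper.

        import numpy as np
        import matplotlib.pyplot as plt

        def uncertainty_map(width, height, gaze_xy, sigma_xy):
            # 2D Gaussian density centred on the estimated gaze point;
            # sigma encodes the propagated uncertainty in camera pixels.
            xs, ys = np.meshgrid(np.arange(width), np.arange(height))
            gx, gy = gaze_xy
            sx, sy = sigma_xy
            return np.exp(-0.5 * (((xs - gx) / sx) ** 2 + ((ys - gy) / sy) ** 2))

        m = uncertainty_map(1280, 720, gaze_xy=(640, 360), sigma_xy=(25, 40))
        plt.imshow(m, cmap="viridis")  # would be overlaid on the world-camera frame
        plt.colorbar(label="relative gaze uncertainty")
        plt.show()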

    An investigation of the distribution of gaze estimation errors in head mounted gaze trackers using polynomial functions

    Second-order polynomials are commonly used for estimating the point of gaze in head-mounted eye trackers. Studies on remote (desktop) eye trackers show that although some non-standard third-order polynomial models can provide better accuracy, higher-order polynomials do not necessarily provide better results. Unlike remote setups, however, where gaze is estimated over a relatively narrow field-of-view surface (e.g. less than 30x20 degrees on typical computer displays), head-mounted gaze trackers (HMGT) are often required to cover a relatively wide field of view, to make sure that gaze is detected in the scene image even for extreme eye angles. In this paper we investigate the behavior of the gaze estimation error distribution across the scene-camera image when polynomial functions are used. Using simulated scenarios, we describe the effects of four different sources of error: interpolation, extrapolation, parallax, and radial distortion. We show that the use of third-order polynomials results in more accurate gaze estimates in HMGT, and that the use of wide-angle lenses may be beneficial in terms of error reduction.
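
    For reference, the polynomial mappings compared in such studies can be fitted by ordinary least squares from calibration data. The sketch below builds the full monomial basis up to a given order (6 terms for order 2, 10 for order 3); the function names and data layout are assumptions for illustration.

        import numpy as np

        def poly_features(x, y, order):
            # All monomials x**i * y**j with i + j <= order:
            # order=2 gives the standard 6-term mapping, order=3 the 10-term one.
            return np.array([x**i * y**j
                             for i in range(order + 1)
                             for j in range(order + 1 - i)])

        def fit_gaze_polynomial(pupil_xy, scene_xy, order=3):
            # Least-squares calibration: pupil coordinates (N x 2) to
            # gaze points in the scene image (N x 2).
            A = np.array([poly_features(x, y, order) for x, y in pupil_xy])
            coeffs, *_ = np.linalg.lstsq(A, scene_xy, rcond=None)
            return coeffs

    Note that a full third-order fit has ten coefficients per coordinate, so it needs at least ten, well-spread, calibration points.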

    The mean point of vergence is biased under projection

    In eye tracking, the point of interest in three-dimensional space is often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We start with a theoretical analysis based on synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors, and that the bias depends on the horizontal versus vertical error distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The error distributions differ among individuals, but they generally lead to the same bias towards the observer, which tends to grow with increasing viewing distance. We also provide a recipe to minimize the bias, which applies to general computations of eye-ray intersection. These findings not only have implications for choosing the calibration method in eye-tracking experiments and for interpreting the observed eye movement data; they also suggest that the mathematical models used for calibration should be considered part of the experiment.
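
    The "point closest to the two lines of sight" is typically the midpoint of the common perpendicular between the two gaze rays. A standard computation is sketched below; the ray origins and directions are assumed given by the tracker, and the rays are assumed non-parallel.

        import numpy as np

        def vergence_point(o1, d1, o2, d2):
            # Midpoint of the shortest segment between two gaze rays
            # (origin o, direction d): the usual vergence-point estimate
            # whose mean the paper shows to be biased under noise.
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            w0 = o1 - o2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b            # ~0 only for parallel rays
            t1 = (b * e - c * d) / denom     # parameter on ray 1
            t2 = (a * e - b * d) / denom     # parameter on ray 2
            return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

        # Hypothetical eye origins 6.4 cm apart, converging on a point ahead
        p = vergence_point(np.array([-0.032, 0.0, 0.0]), np.array([0.05, 0.0, 1.0]),
                           np.array([0.032, 0.0, 0.0]), np.array([-0.05, 0.0, 1.0]))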

    High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods

    This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can degrade gaze estimation when building a commercial or off-the-shelf eye tracker, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye-feature distribution when the eye camera is far from the eye's optical axis. This paper proposes geometric transformation methods that reshape the eye-feature distribution based on a virtual alignment of the eye camera with the center of the eye's optical axis. The data analysis uses eye-tracking data from a simulated environment and from an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using a Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimates that fall within the high-accuracy range.
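
    The virtual realignment can be pictured as a planar perspective transform that maps features seen by the off-axis eye camera to where they would appear with the camera centred on the optical axis. The OpenCV sketch below illustrates that general idea; the four correspondence points and the canonical target square are made-up values, not the paper's method.

        import numpy as np
        import cv2

        # Hypothetical reference features observed in the off-axis eye camera
        # (e.g. feature positions recorded during calibration) ...
        observed = np.float32([[102, 88], [298, 74], [310, 240], [96, 252]])

        # ... and where those features would land for an on-axis camera
        # (an assumed canonical square here).
        canonical = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]])

        H = cv2.getPerspectiveTransform(observed, canonical)

        def realign(points):
            # Reshape eye-feature coordinates as if the camera sat on the
            # optical axis, before the usual polynomial/homography calibration.
            pts = np.float32(points).reshape(-1, 1, 2)
            return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

        print(realign([[150.0, 150.0]]))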