
    Estimation of a focused object using a corneal surface image for eye-based interaction

    Researchers are considering the use of eye tracking in head-mounted camera systems, such as Google’s Project Glass. Typical methods require detailed calibration in advance, but long periods of use disrupt the calibration between the eye and the scene camera. Moreover, even when a portable eye-tracker estimates the point-of-regard, the focused object itself may remain unknown. We therefore propose a novel method for estimating the object a user is focusing on, in which an eye camera captures the reflection on the corneal surface. Eye and environment information can be extracted from the corneal surface image simultaneously. We use inverse ray tracing to rectify the reflected image and a scale-invariant feature transform (SIFT) to identify the object at the point-of-regard. Unwarped images can also be generated continuously from corneal surface images. We consider that the proposed method could be applied to a guidance system, and we confirmed the feasibility of this application in experiments estimating the focused object and the point-of-regard.
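    The SIFT-based matching step this abstract mentions typically pairs descriptors from the unwarped corneal image with descriptors of candidate objects using nearest-neighbour search plus Lowe's ratio test. A minimal numpy sketch of that matching step (not the paper's implementation; function and array names are illustrative):

```python
import numpy as np

def ratio_test_match(desc_query, desc_ref, ratio=0.75):
    """Match query descriptors (e.g. from an unwarped corneal image) against
    reference descriptors using Lowe's ratio test: keep a match only when the
    nearest neighbour is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_ref - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

With real SIFT features the descriptors would be 128-dimensional; the ratio test itself is dimension-agnostic.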

    Seeing the World through Your Eyes

    The reflective nature of the human eye is an underappreciated source of information about what the world around us looks like. By imaging the eyes of a moving person, we can collect multiple views of a scene outside the camera's direct line of sight through the reflections in the eyes. In this paper, we reconstruct a 3D scene beyond the camera's line of sight using portrait images containing eye reflections. This task is challenging due to 1) the difficulty of accurately estimating eye poses and 2) the entangled appearance of the eye iris and the scene reflections. Our method jointly refines the cornea poses, the radiance field depicting the scene, and the observer's eye iris texture. We further propose a simple regularization prior on the iris texture pattern to improve reconstruction quality. Through various experiments on synthetic and real-world captures featuring people with varied eye colors, we demonstrate the feasibility of our approach to recover 3D scenes using eye reflections.
    Comment: CVPR 2024. First two authors contributed equally. Project page: https://world-from-eyes.github.io
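    The geometric core shared by this paper and the corneal-imaging work above is treating the cornea as a (roughly spherical) mirror: a camera ray hitting the cornea is reflected into the scene according to the mirror-reflection law. A minimal sketch of that reflection step, assuming a spherical cornea model (not the paper's full radiance-field pipeline):

```python
import numpy as np

def reflect(d, n):
    """Mirror-reflect incoming ray direction d about unit surface normal n:
    r = d - 2 (d . n) n. For a spherical cornea with centre c, the normal at
    a surface point p is n = (p - c) / |p - c|."""
    return d - 2.0 * np.dot(d, n) * n
```

Tracing such reflected rays from many portrait images is what yields the multiple scene viewpoints the paper exploits.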

    Visual Place Recognition From Eye Reflection

    The cornea in the human eye reflects incoming environmental light, which means information about the surrounding environment can be obtained from the corneal reflection in facial images. As the quality of consumer cameras has increased in recent years, this has raised privacy concerns, such as identifying the people around the subject or the place where a photo was taken. This paper investigates the security risk posed by corneal reflection images: specifically, visual place recognition from eye reflection images. First, we constructed two datasets containing pairs of scene and corneal reflection images. The first dataset was captured in a virtual environment: we showed pre-captured scene images on a 180-degree surrounding display system and recorded the corneal reflections of subjects. The second dataset was captured in an outdoor environment. We developed several visual place recognition algorithms, including CNN-based image descriptors (a naive Siamese network and AFD-Net) combined with whole-image feature representations (VLAD and NetVLAD), and compared the results. We found that AFD-Net+VLAD performed best, identifying the correct scene within the top five candidates in 73.08% of cases. These results demonstrate the potential to estimate the location at which a facial picture was taken, which leads both to a) positive applications, such as localizing a robot while it converses with people, and b) negative scenarios, including the security risk of uploading facial images to public sites.
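    The VLAD representation named in this abstract aggregates a set of local descriptors into one global vector by summing, per visual-word cluster, the residuals of the descriptors assigned to that cluster, then normalizing. A minimal numpy sketch of vanilla VLAD (the paper's AFD-Net+VLAD pipeline adds a learned descriptor front-end not shown here):

```python
import numpy as np

def vlad(descriptors, centroids):
    """VLAD aggregation: for each cluster, sum the residuals (descriptor minus
    centroid) of the descriptors nearest to it, concatenate the per-cluster
    sums, and L2-normalize the result."""
    k, d = centroids.shape
    v = np.zeros((k, d))
    # hard-assign each descriptor to its nearest centroid
    assign = np.argmin(
        np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2),
        axis=1)
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centroids[c]
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

Two images can then be compared by the distance between their VLAD vectors, which is how top-five candidate scenes would be ranked.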

    Iris Recognition: Robust Processing, Synthesis, Performance Evaluation and Applications

    The popularity of iris biometrics has grown considerably over the past few years, resulting in the development of a large number of new iris processing and encoding algorithms. In this dissertation, we discuss the following aspects of the iris recognition problem: iris image acquisition, iris quality, iris segmentation, iris encoding, performance enhancement, and two novel applications.
    The specific claimed novelties of this dissertation include: (1) a method to generate a large-scale realistic database of iris images; (2) a cross-spectral iris matching method for comparing images in the visible color range against images in the near-infrared (NIR) range; (3) a method to evaluate iris image and video quality; (4) a robust quality-based iris segmentation method; (5) several approaches to enhance the recognition performance and security of traditional iris encoding techniques; (6) a method to increase the iris capture volume for acquisition of iris on the move from a distance; and (7) a method to improve the performance of biometric systems using available soft data in the form of links and connections in a relevant social network.
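    Traditional iris encoding, as referenced in this abstract, produces a binary iris code plus a validity mask, and two codes are compared by their fractional Hamming distance over jointly valid bits. A Daugman-style sketch of that comparison step (an illustration of the standard technique, not the dissertation's specific method):

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits marked valid in both masks (occlusions, eyelids, specularities
    are masked out). 0.0 means identical codes; ~0.5 means unrelated irises."""
    valid = mask_a & mask_b
    n = valid.sum()
    if n == 0:
        return 1.0  # no comparable bits: treat as a non-match
    return ((code_a ^ code_b) & valid).sum() / n
```

A decision threshold (commonly in the 0.32–0.36 range in the literature) then separates genuine from impostor comparisons.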

    Evaluation of accurate eye corner detection methods for gaze estimation

    Accurate detection of the iris center and eye corners appears to be a promising approach for low-cost gaze estimation. In this paper we propose novel eye inner-corner detection methods, suggesting both appearance-based and feature-based segmentation approaches. All of these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We demonstrate that a method based on a neural network performs best, even under changing lighting conditions. In addition to this method, algorithms based on AAM and the Harris corner detector achieve better accuracy than recent high-performance facial point tracking methods such as Intraface.
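    The Harris corner detector mentioned here scores each pixel by the structure tensor of local image gradients: R = det(M) - k·trace(M)², with M summed over a small window. A minimal numpy sketch under the assumption of a simple box window (real implementations usually use a Gaussian window):

```python
import numpy as np

def box_sum(a, r=1):
    """Sum each pixel's (2r+1)x(2r+1) neighbourhood via zero-padding and shifts."""
    h, w = a.shape
    out = np.zeros_like(a)
    p = np.pad(a, r)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + h, r + dx:r + dx + w]
    return out

def harris_response(img, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2, where M is the windowed
    structure tensor of image gradients; R is large at corner-like points
    such as eye corners."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_sum(ix * ix)
    syy = box_sum(iy * iy)
    sxy = box_sum(ix * iy)
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
```

Candidate eye-corner locations would then be local maxima of the response map above a threshold.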

    Direction Estimation Model for Gaze Controlled Systems

    Gaze detection requires estimating the position of, and the relation between, the user’s pupil and the corneal glint. This position is mapped into a region of interest by detecting the glint coordinates with edge detectors and then inferring the gaze direction. In this research paper, a Gaze Direction Estimation (GDE) model is proposed for a comparative analysis of two standard edge detectors, Canny and Sobel, in automatically detecting the glint, its coordinates, and subsequently the gaze direction. The results indicate a fairly high percentage of cases in which the correct glint coordinates, and hence the correct gaze direction quadrants, were estimated. These results can further be used to improve the accuracy and performance of different eye-gaze-based systems.
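    The pipeline this abstract describes (edge detection, glint localization, quadrant-based gaze direction) can be sketched end to end with the Sobel operator. A toy numpy version, assuming the glint is the strongest edge response and the gaze quadrant is read off relative to a given pupil centre (illustrative names; not the GDE model's actual implementation):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude from the two Sobel kernels (correlation over a
    zero-padded image); a bright glint produces a strong edge ring."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * win
            gy += SOBEL_Y[dy, dx] * win
    return np.hypot(gx, gy)

def glint_quadrant(img, pupil_center):
    """Take the strongest Sobel response as the glint location and report its
    quadrant relative to the pupil centre (toy stand-in for the mapping step)."""
    mag = sobel_magnitude(img)
    gy, gx = np.unravel_index(np.argmax(mag), mag.shape)
    py, px = pupil_center
    vert = "lower" if gy >= py else "upper"
    horiz = "right" if gx >= px else "left"
    return (gy, gx), f"{vert}-{horiz}"
```

A real system would threshold and smooth the response (as Canny does) rather than take a single argmax, but the quadrant mapping is the same.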