416 research outputs found

    Gazedirector: Fully articulated eye gaze redirection in video

    We present GazeDirector, a new approach for eye gaze redirection that uses model-fitting. Our method first tracks the eyes by fitting a multi-part eye region model to video frames using analysis-by-synthesis, thereby recovering eye region shape, texture, pose, and gaze simultaneously. It then redirects gaze by 1) warping the eyelids from the original image using a model-derived flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the output image in a photorealistic manner. GazeDirector allows us to change where people are looking without person-specific training data, and with full articulation, i.e., we can precisely specify new gaze directions in 3D. Quantitatively, we evaluate both model-fitting and gaze synthesis, with experiments for gaze estimation and redirection on the Columbia gaze dataset. Qualitatively, we compare GazeDirector against recent work on gaze redirection, showing better results especially for large redirection angles. Finally, we demonstrate gaze redirection on YouTube videos by introducing new 3D gaze targets and by manipulating visual behavior.
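The two redirection steps described above — warping eyelid pixels along a model-derived flow field, then alpha-compositing a synthesized eyeball — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the array shapes and argument names are assumptions, and the warp here is a simple nearest-neighbour backward lookup.

```python
import numpy as np

def redirect_gaze(frame, flow, eyeball_rgb, eyeball_alpha):
    """Illustrative sketch of flow-field warping + eyeball compositing.
    frame:        (H, W, 3) input image
    flow:         (H, W, 2) per-pixel (dy, dx) backward flow
    eyeball_rgb:  (H, W, 3) rendered eyeball layer
    eyeball_alpha:(H, W)    compositing mask in [0, 1]"""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Step 1: backward-warp via nearest-neighbour lookup along the flow.
    sy = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, w - 1)
    warped = frame[sy, sx]
    # Step 2: alpha-composite the synthesized eyeball over the warped frame.
    a = eyeball_alpha[..., None]
    return a * eyeball_rgb + (1.0 - a) * warped
```

With zero flow and zero alpha the input passes through unchanged; with alpha near one the rendered eyeball dominates, which is the behaviour the compositing step relies on.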

    FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

    We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture their facial expressions and eye movement in real time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.

    User interface for a better eye contact in videoconferencing


    Model-free head pose estimation based on shape factorisation and particle filtering

    This work forms part of the project Eye-Communicate funded by the Malta Council for Science and Technology through the National Research & Innovation Programme (2012) under Research Grant No. R&I-2012-057.

    Head pose estimation is essential for several applications and is particularly required for head pose-free eye-gaze tracking, where estimation of head rotation permits free head movement during tracking. While the literature is broad, the accuracy of recent vision-based head pose estimation methods is contingent upon the availability of training data or accurate initialisation and tracking of specific facial landmarks. In this paper, we propose a method to estimate the head pose in real time from the trajectories of a set of feature points spread randomly over the face region, without requiring a training phase or model-fitting of specific facial features. Instead of seeking specific facial landmarks, our method exploits the sparse 3-dimensional shape of the surface of interest, recovered via shape and motion factorisation, in combination with particle filtering to correct mistracked feature points and improve upon an initial estimate of the 3-dimensional shape during tracking. In comparison with two other methods, quantitative results obtained through our model- and landmark-free method yield a reduction in the head pose estimation error for a wide range of head rotation angles. Peer-reviewed.
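The shape-and-motion factorisation the abstract refers to is classically done by truncating an SVD of the tracked-point measurement matrix to rank 3 (Tomasi-Kanade style). The sketch below shows that core step only — it recovers motion and sparse 3-D shape up to an affine ambiguity, and omits the particle-filtering correction the paper adds; names and shapes are illustrative.

```python
import numpy as np

def factorise_shape_motion(tracks):
    """Rank-3 factorisation of feature-point trajectories.
    tracks: (2F, P) matrix of image coordinates for P points over F frames
            (x rows and y rows stacked per frame).
    Returns motion (2F, 3) and sparse 3-D shape (3, P), up to an
    affine ambiguity."""
    # Register each row to the centroid of the tracked points.
    W = tracks - tracks.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Split the singular values evenly between the two factors.
    M = U[:, :3] * np.sqrt(s[:3])          # motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # shape
    return M, S
```

For noise-free trajectories of a rigid point set, the product of the two factors reconstructs the centred measurement matrix exactly, which is what makes the recovered sparse shape usable for the subsequent pose tracking.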

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction compared to traditional desktop monitors. The computer mouse remains the most common input tool for users to interact with these larger displays. Much effort has been made to render this interaction more natural and intuitive for the user. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to the user and allows interaction at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used for depth information recovery. This thesis investigates the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the interaction area available, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing to real-world objects are explored. Studies were conducted to investigate the way humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalised. A proof-of-concept prototype was built and evaluated through various user studies. The results suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can assist designers in developing more accurate and natural interactive systems that exploit humans' natural pointing behaviours.
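One simple geometric model for the virtual-touchscreen idea above is to cast a ray from the user's eye through the fingertip and intersect it with the display plane. The sketch below shows that intersection only; it is an illustrative simplification (the thesis' actual pointing models may differ), and all names are assumptions.

```python
import numpy as np

def ray_plane_hit(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye->fingertip pointing ray with the display plane.
    All arguments are 3-vectors in the same world coordinate frame.
    Returns the 3-D hit point, or None if the ray is parallel to the plane."""
    d = fingertip - eye                              # ray direction
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None                                  # no intersection
    t = np.dot(plane_normal, plane_point - eye) / denom
    return eye + t * d                               # point on the display
```

Converting the 3-D hit point to pixel coordinates then only requires the display's position, orientation, and size — which is why estimating the user's location matters for this style of interaction.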

    A Novel Authentication Method Using Multi-Factor Eye Gaze

    A novel, rapid and robust method for one-step multi-factor authentication of a user is presented, employing multi-factor eye gaze. The mobile environment presents challenges that render the conventional password model obsolete. The primary goal is to offer an authentication method that competitively replaces the password, while offering improved security and usability. This method and apparatus combine the smooth operation of biometric authentication with the protection of knowledge-based authentication to robustly authenticate a user and secure information on a mobile device in a manner that is easily used and requires no external hardware. This work demonstrates a solution comprising a pupil segmentation algorithm, gaze estimation, and an innovative application that allows a user to authenticate themselves using gaze as the interaction medium.
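The pipeline's first stage, pupil segmentation, often starts from the observation that the pupil is the darkest blob in an eye image. The sketch below is a deliberately crude stand-in for that stage — dark-pixel thresholding plus a centroid — not the paper's algorithm; the percentile threshold is an assumption.

```python
import numpy as np

def pupil_center(gray, percentile=5):
    """Rough pupil localisation in a grayscale eye image:
    keep the darkest pixels and return their centroid (x, y).
    A simplification of a real pupil-segmentation stage."""
    thresh = np.percentile(gray, percentile)   # intensity cut-off
    ys, xs = np.nonzero(gray <= thresh)        # candidate pupil pixels
    return xs.mean(), ys.mean()                # centroid of the dark blob
```

A production segmenter would add connected-component filtering, ellipse fitting, and robustness to glints and eyelid occlusion, but the centroid of the dark region is already a workable seed for gaze estimation.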

    Noise Challenges in Monomodal Gaze Interaction

    Modern graphical user interfaces (GUIs) are designed with able-bodied users in mind. Operating these interfaces can be impossible for some users who are unable to control the conventional mouse and keyboard. An eye tracking system offers possibilities for independent use and improved quality of life via dedicated interface tools especially tailored to the users' needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards the robustness, accuracy and precision of modern eye-tracking systems, and there are many available on the market. Even though gaze tracking technologies have undergone dramatic improvements over the past years, the systems are still very imprecise. This thesis deals with current challenges of mono-modal gaze interaction and aims at improving access to technology and interface control for users who are limited to the eyes only. Low-cost equipment in eye tracking contributes toward improved affordability, but potentially at the cost of introducing more noise in the system due to the lower quality of the hardware. This implies that methods of dealing with noise, and creative approaches towards getting the best out of the data stream, are most wanted. The work in this thesis presents three contributions that may advance the use of low-cost mono-modal gaze tracking and research in the field:
    - An assessment of a low-cost open-source gaze tracker and two eye tracking systems through an accuracy and precision test and a performance evaluation.
    - Development and evaluation of a novel 3D typing system with high tolerance to noise, based on continuous panning and zooming.
    - Development and evaluation of novel selection tools that compensate for noisy input during small-target selections in modern GUIs.
    This thesis may be of particular interest for those working on the use of eye trackers for gaze interaction and how to deal with reduced data quality. The work is accompanied by several software applications developed for the research projects, which can be freely downloaded from the eyeInteract appstore.
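A standard way to compensate for the sensor noise discussed above is to low-pass small, jitter-sized gaze motion while letting saccade-sized jumps through unsmoothed. The sketch below is one generic such filter, not a method from the thesis; both thresholds are illustrative assumptions.

```python
def smooth_gaze(samples, alpha=0.2, saccade_px=80.0):
    """Exponentially smooth a stream of (x, y) gaze samples, resetting on
    large jumps so saccades stay responsive. alpha is the smoothing gain;
    saccade_px is the L1 jump size treated as a real gaze shift."""
    out = []
    sx = sy = None
    for x, y in samples:
        if sx is None or abs(x - sx) + abs(y - sy) > saccade_px:
            sx, sy = x, y              # saccade (or first sample): reset
        else:
            sx += alpha * (x - sx)     # fixation jitter: low-pass it
            sy += alpha * (y - sy)
        out.append((sx, sy))
    return out
```

Filters of this shape trade latency during fixations for stability, which is exactly the trade-off small-target selection tools for noisy trackers have to manage.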

    A deep learning palpebral fissure segmentation model in the context of computer user monitoring

    The intense use of computers and visual terminals is a daily practice for many people. As a consequence, complaints of visual and non-visual symptoms, such as headaches and neck pain, are frequent. These symptoms make up Computer Vision Syndrome, and the factors related to this syndrome include the distance between the user and the screen, the number of hours of equipment use, a reduced blink rate, and the number of incomplete blinks while using the device. Although some of these items can be controlled by ergonomic measures, controlling blinks and their efficiency is more complex. A considerable number of studies have looked at measuring blinks, but few have dealt with the presence of incomplete blinks. Conventional measurement techniques have limitations when it comes to detecting and analyzing the completeness of blinks, especially due to individuals' differing eye and blink characteristics, as well as the position and movement of the user. Segmenting the palpebral fissure can be a first step towards solving this problem, characterizing individuals well regardless of these factors. This work investigates the development of deep learning models to perform palpebral fissure segmentation in situations where the eyes cover a small region of the image, such as images from a computer webcam. Training, validation and test sets were generated based on the CelebAMask-HQ and Closed Eyes in the Wild datasets. Various machine learning techniques were applied, resulting in a final trained model with a Dice coefficient close to 0.90 on the test data, a result similar to that obtained by models trained with images in which the eye region occupies most of the image.
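The Dice coefficient reported for the segmentation model (close to 0.90 on the test split) measures overlap between predicted and ground-truth masks. A minimal reference implementation for binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for identical non-empty masks,
    values near 0.0 for disjoint ones; eps guards the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Dice weights overlap relative to mask size, which makes it a common choice when the structure of interest — here, the palpebral fissure — occupies only a small fraction of the image.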