
    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, enabling applications in biometrics (e.g., FaceID on the iPhone X), facial expression analysis, and gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this affects detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, together with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We also found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings affect mobile applications that leverage face and eye detection, and derive practical implications for addressing the state of the art's limitations.
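    The visibility rates reported above (full face ~29% overall, eyes-only 75% while watching videos) are frequency counts over labeled photos. A minimal sketch of that aggregation, assuming a hypothetical list of per-photo visibility labels (the category names are illustrative, not taken from the paper):

```python
from collections import Counter

def visibility_rates(labels):
    """Fraction of photos in each visibility category.

    labels: one entry per photo, e.g. 'full_face', 'partial_face',
    'eyes_only', or 'none' (illustrative category names).
    """
    counts = Counter(labels)
    total = len(labels)
    return {category: n / total for category, n in counts.items()}

# Four photos, one of which shows the full face:
rates = visibility_rates(['full_face', 'partial_face', 'partial_face', 'eyes_only'])
# rates['full_face'] == 0.25
```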

    Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

    Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in both person-independent and personalized gaze estimation.
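    As a toy illustration of the landmark-driven direction (not the authors' method), a crude gaze feature can be derived from detected landmarks alone: the iris center's offset from the eye-region center, normalized by eye width. All names and the geometry below are illustrative assumptions:

```python
import math

def gaze_offset(eye_corner_l, eye_corner_r, iris_center):
    """Normalized (x, y) iris offset within the eye region.

    A simple landmark-derived feature a model-fitting or lightweight
    learning-based gaze estimator might start from; all inputs are
    (x, y) pixel coordinates.
    """
    cx = (eye_corner_l[0] + eye_corner_r[0]) / 2.0
    cy = (eye_corner_l[1] + eye_corner_r[1]) / 2.0
    width = math.dist(eye_corner_l, eye_corner_r)
    return ((iris_center[0] - cx) / width, (iris_center[1] - cy) / width)

# Eye corners at (0, 0) and (10, 0), iris slightly right of center:
offset = gaze_offset((0, 0), (10, 0), (7, 1))   # (0.2, 0.1)
```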

    Ultrasonic testing of adhesive bonds of thick composites with applications to wind turbine blades

    This paper discusses the use of pulse-echo ultrasonic testing for the inspection of adhesive bonds between very thick composite plates (thickness greater than 30 mm). Large wind turbine blades use very thick composite plates for their main structural members, and inspection of the adhesive bond line is vital. A wide range of samples was created by varying the thickness of the plate and of the adhesive. The influence of experimental parameters such as frequency on the measurement is studied. Two different frequencies are chosen, and the measurement error bars are determined experimentally. T-ray measurements were used to verify and correct the results, and conclusions are drawn from the combined results.
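    In pulse-echo testing, the depth of a reflector (e.g., the bond line or a defect) follows from the round-trip time of flight and the wave speed in the material: depth = c·t/2. A minimal sketch with illustrative values (the ~3000 m/s wave speed is an assumption for the example, not a figure from the paper):

```python
def echo_depth(time_of_flight_s, wave_speed_m_s):
    """Reflector depth from a pulse-echo round-trip time: c * t / 2."""
    return wave_speed_m_s * time_of_flight_s / 2.0

# A reflector 30 mm deep, at an assumed wave speed of 3000 m/s,
# returns an echo after a 20 microsecond round trip:
depth_m = echo_depth(20e-6, 3000.0)   # 0.03 m
```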

    The effect of temperature upon facet number in the bar-eyed mutant of Drosophila

    Thesis (Ph.D.)--University of Illinois, 1919. Typescript. Vita. Includes bibliographical references.

    Supervised descent method (SDM) applied to accurate pupil detection in off-the-shelf eye tracking systems

    Precise detection of the pupil/iris center is key to estimating gaze accurately. This becomes especially challenging in low-cost frameworks, where the algorithms employed in high-performance systems fail. In recent years, considerable effort has been devoted to applying training-based methods to low-resolution images. In this paper, the Supervised Descent Method (SDM) is applied to the GI4E database. The 2D landmarks employed for training are the corners of the eyes and the pupil centers. To validate the proposed algorithm, a cross-validation procedure is performed. The training strategy allows us to state that our method can potentially outperform state-of-the-art algorithms applied to the same dataset in terms of 2D accuracy. These promising results encourage further study of training-based methods for eye tracking. Funded by the Spanish Ministry of Economy, Industry and Competitiveness, contracts TIN2014-52897-R and TIN2017-84388-
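    At test time, SDM refines a landmark estimate through a cascade of learned linear updates, x ← x + R·φ(x) + b, where φ extracts image features around the current landmarks and (R, b) are regressors learned per stage. A pure-Python sketch of that cascade (training of R and b and the feature extractor are omitted; all names are illustrative):

```python
def sdm_predict(x0, stages, features):
    """Apply a cascade of learned linear updates to a landmark vector.

    x0:       initial landmark estimate (flat list of coordinates)
    stages:   list of (R, b) pairs; R is a matrix (list of rows) and
              b a bias vector, both learned offline by regression
    features: callable mapping the current landmarks to a feature vector
    """
    x = list(x0)
    for R, b in stages:
        phi = features(x)
        # One descent step: x <- x + R @ phi + b
        x = [x[i] + b[i] + sum(R[i][j] * phi[j] for j in range(len(phi)))
             for i in range(len(x))]
    return x

# A single toy stage with a constant feature vector:
stages = [([[2.0], [3.0]], [0.5, 0.5])]
result = sdm_predict([0.0, 0.0], stages, lambda x: [1.0])
# result == [2.5, 3.5]
```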