
    Supervised descent method (SDM) applied to accurate pupil detection in off-the-shelf eye tracking systems

    Precise detection of the pupil/iris center is key to accurate gaze estimation. This becomes especially challenging in low-cost frameworks, where the algorithms employed in high-performance systems fail. In recent years, substantial effort has been devoted to applying training-based methods to low-resolution images. In this paper, the Supervised Descent Method (SDM) is applied to the GI4E database. The 2D landmarks employed for training are the eye corners and the pupil centers. The proposed algorithm is validated with a cross-validation procedure. The training strategy employed allows us to affirm that our method can potentially outperform state-of-the-art algorithms applied to the same dataset in terms of 2D accuracy. These promising results encourage further study of training-based methods for eye tracking. Spanish Ministry of Economy, Industry and Competitiveness, contracts TIN2014-52897-R and TIN2017-84388-
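    The Supervised Descent Method mentioned above refines a landmark estimate through a cascade of learned linear updates of the form x_{k+1} = x_k + R_k φ(x_k) + b_k. The following is a minimal illustrative sketch of that cascade; the feature function, regressor matrices, and bias vectors are placeholders, not the authors' trained models.

```python
import numpy as np

def sdm_refine(x0, features, regressors, biases):
    """Apply a cascade of learned descent steps to a landmark estimate.

    x0         : (2 * n_landmarks,) initial landmark coordinate vector
    features   : callable mapping a landmark vector to a feature vector
                 (e.g. SIFT descriptors around each landmark in SDM)
    regressors : list of descent matrices R_k learned by linear regression
    biases     : list of bias vectors b_k
    """
    x = x0.copy()
    for R, b in zip(regressors, biases):
        # One SDM cascade step: x_{k+1} = x_k + R_k * phi(x_k) + b_k
        x = x + R @ features(x) + b
    return x
```

    In training, each R_k and b_k is fit by least squares so that the update moves perturbed initializations toward the ground-truth landmarks; at test time only the cheap matrix-vector steps above are needed.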

    SeTA: semiautomatic tool for annotation of eye tracking images

    The availability of large-scale tagged datasets is a must in the field of deep learning applied to eye tracking. In this paper, the potential of the Supervised Descent Method (SDM) as a semiautomatic labelling tool for eye tracking images is shown. The objective of the paper is to show how the human effort needed to manually label large eye tracking datasets can be radically reduced by the use of cascaded regressors. Applications are provided for both high- and low-resolution systems: iris/pupil center labelling is shown as an example for low-resolution images, while pupil contour point detection is demonstrated in high resolution. In both cases manual annotation requirements are drastically reduced. Spanish Ministry of Science, Innovation and Universities, contract TIN2017-84388-

    A Framework to Estimate the Key Point Within an Object Based on a Deep Learning Object Detection

    Automatic identification of key points within objects is crucial in various application domains. This paper presents a novel framework for accurately estimating the key point within an object by leveraging deep neural network-based object detection. The proposed framework is built upon a training dataset annotated with four non-overlapping bounding boxes, one of which shares a coordinate with the key point. These bounding boxes collectively cover the entire object, enabling automatic annotation if region annotations around the key point exist. The trained object detector is then used to generate detection results, which are post-processed to estimate the key point. To validate the effectiveness of the framework, experiments were conducted on two distinct datasets: cross-sectional images of a parawood log and pupil images. The experimental results demonstrate that the proposed framework surpasses previously proposed approaches in terms of precision, recall, F1-score, and other domain-specific metrics. The improvement in performance can be attributed to the annotation strategy and the fusion of object detection and key point estimation within a unified deep learning framework. The contribution of this study lies in introducing a novel framework for estimating key points within objects based on deep neural network-based object detection. By leveraging annotated training data and post-processing techniques, the approach achieves superior performance compared to existing methods. This work fills a gap in the field by integrating object detection and key point estimation, which has received limited attention in previous research. The framework offers potential applications in precise object analysis and understanding. DOI: 10.28991/HIJ-2023-04-01-08
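    The abstract does not spell out the post-processing step, but one natural reading of the four-box annotation scheme is that each detected box is labeled by quadrant and its inner corner touches the key point, so the candidate corners can be averaged. The sketch below is a hypothetical illustration under that assumption, not the paper's published algorithm; the quadrant labels and box format are invented for the example.

```python
def estimate_key_point(boxes):
    """Estimate a key point from four quadrant bounding boxes.

    boxes: dict mapping quadrant labels 'tl', 'tr', 'bl', 'br'
    to (x1, y1, x2, y2) detections, assuming each box's corner
    nearest the object center coincides with the key point.
    """
    candidates = [
        (boxes['tl'][2], boxes['tl'][3]),  # bottom-right corner of top-left box
        (boxes['tr'][0], boxes['tr'][3]),  # bottom-left corner of top-right box
        (boxes['bl'][2], boxes['bl'][1]),  # top-right corner of bottom-left box
        (boxes['br'][0], boxes['br'][1]),  # top-left corner of bottom-right box
    ]
    xs, ys = zip(*candidates)
    # Averaging the four candidate corners smooths out detection noise.
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

    Averaging turns four noisy single-corner estimates into one more robust prediction, which is one plausible way the detection results could be "post-processed to estimate the key point."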