103 research outputs found

    Real-time Multi-person Eyeblink Detection in the Wild for Untrimmed Video

    Full text link
    Real-time eyeblink detection in the wild can serve fatigue detection, face anti-spoofing, emotion analysis, and more. Existing research efforts generally focus on single-person cases in trimmed video. However, the multi-person scenario within untrimmed videos is also important for practical applications, yet it has not received adequate attention. To address this, we shed light on this research field for the first time with essential contributions on dataset, theory, and practice. In particular, we propose a large-scale dataset termed MPEblink that contains 686 untrimmed videos with 8748 eyeblink events under multi-person conditions. The samples are captured from unconstrained films to reflect "in the wild" characteristics. We also propose a real-time multi-person eyeblink detection method. Unlike existing counterparts, our approach runs in a one-stage spatio-temporal way with end-to-end learning capacity. Specifically, it simultaneously addresses the sub-tasks of face detection, face tracking, and human instance-level eyeblink detection. This paradigm holds 2 main advantages: (1) eyeblink features can be facilitated via the face's global context (e.g., head pose and illumination condition) with joint optimization and interaction, and (2) addressing these sub-tasks in parallel rather than sequentially saves considerable time, meeting the real-time running requirement. Experiments on MPEblink verify the essential challenges of real-time multi-person eyeblink detection in the wild for untrimmed video. Our method also outperforms existing approaches by large margins and with a high inference speed. Comment: Accepted by CVPR 2023
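
    The one-stage, parallel multi-task design the abstract describes can be pictured with a minimal sketch like the following, assuming a shared spatio-temporal backbone feeding three parallel heads. The layer sizes, head structures, and class names here are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class OneStageBlinkDetector(nn.Module):
    """Hypothetical sketch: one backbone, three parallel task heads."""
    def __init__(self, feat_dim=256, num_anchors=9):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in spatio-temporal encoder
            nn.Conv3d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 16, 16)),
        )
        # Parallel heads: joint optimization lets blink features share the
        # face's global context (head pose, illumination) via the backbone.
        self.face_head = nn.Conv2d(feat_dim, num_anchors * 4, 1)   # face boxes
        self.track_head = nn.Conv2d(feat_dim, 128, 1)              # track embeddings
        self.blink_head = nn.Conv2d(feat_dim, num_anchors, 1)      # blink logits

    def forward(self, clip):                    # clip: (B, 3, T, H, W)
        f = self.backbone(clip).squeeze(2)      # (B, C, 16, 16)
        return self.face_head(f), self.track_head(f), self.blink_head(f)
```

    Because all three heads read the same feature map in a single pass, no head has to wait for another's output, which is the source of the claimed real-time speedup over sequential pipelines.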

    mEBAL2 Database and Benchmark: Image-based Multispectral Eyeblink Detection

    Full text link
    This work introduces a new multispectral database and novel approaches for eyeblink detection in RGB and Near-Infrared (NIR) individual images. Our contributed dataset (mEBAL2, multimodal Eye Blink and Attention Level estimation, Version 2) is the largest existing eyeblink database, representing a great opportunity to improve data-driven multispectral approaches for blink detection and related applications (e.g., attention level estimation and presentation attack detection in face biometrics). mEBAL2 includes 21,100 image sequences from 180 different students (more than 2 million labeled images in total), recorded while they conducted a number of e-learning tasks of varying difficulty or took a real course on HTML initiation through the edX MOOC platform. mEBAL2 uses multiple sensors, including two Near-Infrared (NIR) and one RGB camera to capture facial gestures during the execution of the tasks, as well as an Electroencephalogram (EEG) band to record the cognitive activity of the user and blinking events. Furthermore, this work proposes a Convolutional Neural Network architecture as a benchmark for blink detection on mEBAL2, with performances up to 97%. Different training methodologies are implemented using the RGB spectrum, the NIR spectrum, and the combination of both to enhance the performance of existing eyeblink detectors. We demonstrate that combining NIR and RGB images during training improves the performance of RGB eyeblink detectors (i.e., detection based only on an RGB image). Finally, the generalization capacity of the proposed eyeblink detectors is validated in wilder and more challenging environments like the HUST-LEBW dataset to show the usefulness of mEBAL2 for training a new generation of data-driven approaches for eyeblink detection. Comment: This paper is under consideration at Pattern Recognition Letters
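
    A minimal sketch of the multispectral training idea described above, assuming both spectra are reduced to single-channel eye crops so they can be pooled into one training set; the dataset handling and the tiny CNN are placeholders, not mEBAL2's release format or the paper's benchmark network.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader

def make_blink_cnn():
    # Placeholder binary classifier: eye open vs. closed.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(64, 2),
    )

def train_multispectral(rgb_ds, nir_ds, epochs=5):
    # Grayscale-converted RGB and NIR crops share one input channel, so
    # both spectra can be pooled into a single training set; at test time
    # the same model runs on RGB alone.
    loader = DataLoader(ConcatDataset([rgb_ds, nir_ds]),
                        batch_size=64, shuffle=True)
    model = make_blink_cnn()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```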

    Multimodal Human Eye Blink Recognition Using Z-score Based Thresholding and Weighted Features

    Get PDF
    A novel real-time multimodal eye blink detection method is proposed, using an amalgam of five unique weighted features extracted from the circle boundary formed by the eye landmarks. The five features (Vertical Head Positioning, Orientation Factor, Proportional Ratio, Area of Intersection, and Upper Eyelid Radius) provide the critical information, thresholded via a z-score, for accurately predicting the eye status and thus the blinking status. An accurate and precise algorithm employing the five weighted features is proposed to predict eye status (open/closed). One state-of-the-art dataset, ZJU (eye-blink), is used to measure the performance of the method. Precision, recall, F1-score, and the ROC curve measure the proposed method's performance qualitatively and quantitatively. Increased accuracy (around 97.2%) and precision (97.4%) are obtained compared to other existing unimodal approaches. The proposed method is shown to outperform state-of-the-art methods in efficiency
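
    The weighted-feature z-score idea might be sketched as follows; the feature weights, the open-eye statistics, and the threshold value are all illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

# Assumed weights for the five features listed in the abstract.
FEATURE_WEIGHTS = np.array([0.25, 0.20, 0.20, 0.20, 0.15])

def eye_status(features, open_mean, open_std, z_threshold=-2.0):
    """features: per-frame values of (vertical head positioning,
    orientation factor, proportional ratio, area of intersection,
    upper eyelid radius); open_mean/open_std: statistics of the
    weighted score over known open-eye frames."""
    score = float(np.dot(FEATURE_WEIGHTS, features))
    z = (score - open_mean) / open_std
    return "closed" if z < z_threshold else "open"

def count_blinks(statuses):
    # A blink is a closed interval followed by a return to open.
    blinks, prev = 0, "open"
    for s in statuses:
        if prev == "closed" and s == "open":
            blinks += 1
        prev = s
    return blinks
```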

    Spatiotemporal CNN with Pyramid Bottleneck Blocks: Application to eye blinking detection

    Get PDF
    Eye blink detection is a challenging problem that many researchers are working on because it has the potential to support many facial analysis tasks, such as face anti-spoofing, driver drowsiness detection, and the diagnosis of some health disorders. There have been few attempts to detect blinking in the wild, while most of the work has been done under controlled conditions. Moreover, current learning approaches are designed to process sequences that contain only a single blink, ignoring the case of multiple eye blinks. In this work, we propose a fast framework for eye blink detection and eye blink verification that can effectively extract multiple blinks from image sequences while handling several challenges such as lighting changes, pose variety, and changes in appearance. The proposed framework employs a fast landmark detector to extract multiple facial key points, including those that identify the eye regions. Then, an SVD-based method is proposed to extract potential eye blinks in a moving time window that is updated with new images every second. Finally, the detected blink candidates are verified using a 2D Pyramidal Bottleneck Block Network (PBBN). We also propose an alternative approach that takes a sequence of frames instead of a single image as input and employs a continuous 3D PBBN, following the schemes of most state-of-the-art approaches. Experimental results show that the proposed approach performs better than the state-of-the-art approaches
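
    One plausible reading of the SVD step is sketched below: a rank-1 reconstruction separates the slow baseline of an eye-opening signal from short transient dips inside each window. The Hankel-style embedding, window length, and threshold are assumptions, not the paper's exact procedure.

```python
import numpy as np

def blink_candidates(eye_signal, win=75, step=25, thresh=2.5):
    """eye_signal: 1-D numpy array of a per-frame eye-opening measure."""
    candidates = []
    for start in range(0, len(eye_signal) - win + 1, step):
        w = eye_signal[start:start + win]
        # Embed the window as a Hankel-style matrix so the SVD can
        # separate the slow baseline from short transient dips.
        m = np.lib.stride_tricks.sliding_window_view(w, 15)
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        baseline = s[0] * np.outer(u[:, 0], vt[0])     # rank-1 approximation
        residual = np.abs(m - baseline).mean(axis=1)
        z = (residual - residual.mean()) / (residual.std() + 1e-8)
        for i in np.flatnonzero(z > thresh):
            candidates.append(start + i)               # frame index of a dip
    return sorted(set(candidates))
```

    In the paper's pipeline these candidates would then go to the 2D PBBN verifier, which accepts or rejects each one.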

    A study on tiredness assessment by using eye blink detection

    Get PDF
    In this paper, the loss of attention of automotive drivers is studied using eye blink detection. Facial landmark detection for locating the eyes is explored. Eye blinks are then detected using the Eye Aspect Ratio. The driver's tiredness is determined by comparing the duration of eye closure to a set period. The total number of eye blinks in a minute is also counted and compared with a known standard value to detect drowsiness. If either condition is met, the system decides the driver is unconscious. A total of 120 samples were taken with the light source placed at the front, back, and side, 40 samples for each position. The maximum error rate, 15%, occurred when the light source was placed at the back; the best case, a 7.5% error rate, occurred with the light source at the front. The eye blink detection gave an average error of 11.67% depending on the position of the light source. Another 120 samples were taken at different times of day to count total eye blinks per minute. The blink rate was highest in the morning, averaging 5.78 blinks per minute, and lowest at midnight, with 3.33 blinks per minute. The system performed satisfactorily and captured the eye blink pattern with 92.7% accuracy
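
    The Eye Aspect Ratio mentioned above is a standard landmark measure, EAR = (|p2 - p6| + |p3 - p5|) / (2 |p1 - p4|) over the six eye landmarks, which drops sharply when the eye closes. A minimal counting sketch follows; the closure threshold and minimum frame count are common defaults, not this paper's exact calibration.

```python
import numpy as np

def eye_aspect_ratio(eye):                 # eye: (6, 2) landmark array
    a = np.linalg.norm(eye[1] - eye[5])    # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])    # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])    # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, closed_thresh=0.21, min_frames=2):
    blinks = run = 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1                       # eye currently closed
        else:
            if run >= min_frames:          # a sufficiently long closure
                blinks += 1
            run = 0
    return blinks
```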

    A deep learning palpebral fissure segmentation model in the context of computer user monitoring

    Get PDF
    The intense use of computers and visual terminals is a daily practice for many people. As a consequence, complaints of visual and non-visual symptoms, such as headaches and neck pain, are frequent. These symptoms make up Computer Vision Syndrome, and among the factors related to this syndrome are the distance between the user and the screen, the number of hours of equipment use, a reduced blink rate, and the number of incomplete blinks while using the device. Although some of these items can be controlled by ergonomic measures, controlling blinks and their efficiency is more complex. A considerable number of studies have looked at measuring blinks, but few have dealt with the presence of incomplete blinks. Conventional measurement techniques have limitations when it comes to detecting and analyzing the completeness of blinks, especially due to the different eye and blink characteristics of individuals, as well as the position and movement of the user. Segmenting the palpebral fissure can be a first step towards solving this problem, by characterizing individuals well regardless of these factors. This work investigates the development of Deep Learning models to perform palpebral fissure segmentation in situations where the eyes cover a small region of the images, such as images from a computer webcam. Training, validation, and test sets were generated based on the CelebAMask-HQ and Closed Eyes in the Wild datasets. Various machine learning techniques are used, resulting in a final trained model with a Dice Coefficient close to 0.90 on the test data, a result similar to that obtained by models trained with images in which the eye region occupies most of the image
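
    For reference, the Dice Coefficient reported above measures the overlap between the predicted palpebral-fissure mask and the ground truth as Dice = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect match). A minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```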

    CLERA: A Unified Model for Joint Cognitive Load and Eye Region Analysis in the Wild

    Full text link
    Non-intrusive, real-time analysis of the dynamics of the eye region allows us to monitor humans' visual attention allocation and estimate their mental state during the performance of real-world tasks, which can potentially benefit a wide range of human-computer interaction (HCI) applications. While commercial eye-tracking devices have been frequently employed, the difficulty of customizing these devices places unnecessary constraints on the exploration of more efficient, end-to-end models of eye dynamics. In this work, we propose CLERA, a unified model for Cognitive Load and Eye Region Analysis, which achieves precise keypoint detection and spatiotemporal tracking in a joint-learning framework. Our method demonstrates significant efficiency and outperforms prior work on tasks including cognitive load estimation, eye landmark detection, and blink estimation. We also introduce a large-scale dataset of 30k human faces with joint pupil, eye-openness, and landmark annotation, which aims to support future HCI research on human factors and eye-related analysis. Comment: ACM Transactions on Computer-Human Interaction
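
    The joint-learning framework can be caricatured as one shared encoder with per-task heads trained under a weighted sum of losses. Everything below (head names, dimensions, loss weights) is an illustrative assumption, not CLERA's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEyeModel(nn.Module):
    def __init__(self, feat_dim=128, n_landmarks=12):
        super().__init__()
        self.encoder = nn.Sequential(                  # stand-in encoder
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.landmarks = nn.Linear(feat_dim, n_landmarks * 2)  # (x, y) pairs
        self.blink = nn.Linear(feat_dim, 1)                    # openness logit
        self.load = nn.Linear(feat_dim, 1)                     # cognitive load

    def forward(self, x):
        f = self.encoder(x)
        return self.landmarks(f), self.blink(f), self.load(f)

def joint_loss(outputs, targets, w=(1.0, 0.5, 0.5)):
    # Weighted sum of per-task losses; the shared encoder receives
    # gradients from all three tasks at once.
    lm, blink, load = outputs
    lm_t, blink_t, load_t = targets
    return (w[0] * F.mse_loss(lm, lm_t)
            + w[1] * F.binary_cross_entropy_with_logits(blink, blink_t)
            + w[2] * F.mse_loss(load, load_t))
```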

    On Motion Analysis in Computer Vision with Deep Learning: Selected Case Studies

    Get PDF
    Motion analysis is one of the essential enabling technologies in computer vision. Despite recent significant advances, image-based motion analysis remains a very challenging problem. The challenge arises because motion features are extracted directly from a sequence of images without any other metadata, making the extraction of motion information inherently more difficult than in other computer vision disciplines. In a traditional approach, motion analysis is often formulated as an optimisation problem, with the motion model hand-crafted to reflect our understanding of the problem domain. The critical element of these traditional methods is a prior assumption about the model of motion believed to represent a specific problem. A recent trend in data analytics is to replace hand-crafted prior assumptions with a model learned directly from observational data with no, or very limited, prior assumptions about that model. Although known for a long time, these machine-learning-based approaches have been shown to be competitive only very recently, due to advances in so-called deep learning methodologies. This work's key aim has been to investigate novel approaches, utilising deep learning methodologies, for motion analysis where the motion model is learned directly from observed data. These new approaches have focused on investigating deep network architectures suitable for the effective extraction of spatiotemporal information. Due to the volume and structure of the estimated motion parameters, it is frequently difficult or even impossible to obtain relevant ground truth data. Missing ground truth leads to the choice of unsupervised learning methodologies, which are challenging to apply to the already challenging high-dimensional motion representation of an image sequence. The main challenge with unsupervised learning is to evaluate whether the algorithm can learn the data model directly from the data alone, without any prior knowledge presented to the deep learning model during training. In this project, an emphasis has been put on unsupervised learning approaches. Owing to the broad spectrum of computer vision problems and applications related to motion analysis, the research reported in the thesis has focused on three specific motion analysis challenges and corresponding practical case studies: motion detection and recognition, and 2D and 3D motion field estimation. Eyeblink quantification has been used as a case study for the motion detection and recognition problem. The approach proposed for this problem consists of a novel network architecture processing weakly corresponding images in an action completion regime, with learned spatiotemporal image features fused using cascaded recurrent networks. The stereo-vision disparity estimation task has been selected as a case study for the 2D motion field estimation problem. The proposed method directly estimates occlusion maps using a novel convolutional neural network architecture trained with a custom-designed loss function in an unsupervised manner. The volumetric data registration task has been chosen as a case study for the 3D motion field estimation problem. The proposed solution is based on a 3D CNN, with a novel architecture featuring a Generative Adversarial Network used during training to improve network performance on unseen data.
    All the proposed networks demonstrated state-of-the-art performance compared to other corresponding methods reported in the literature on a number of assessment metrics. In particular, the proposed architecture for 3D motion field estimation has been shown to outperform the previously reported manual expert-guided registration methodology
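
    For the stereo-disparity case study, the unsupervised objective is typically photometric: with no ground-truth disparity, the left image is reconstructed by warping the right image with the predicted disparity, and the reconstruction error drives learning. The sketch below illustrates that principle only, as an assumption; the thesis's custom loss and occlusion handling are more elaborate.

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right, disparity):
    """right: (B, C, H, W); disparity: (B, 1, H, W) in pixels."""
    b, _, h, w = right.shape
    xs = torch.linspace(-1, 1, w, device=right.device).view(1, 1, w).expand(b, h, w)
    ys = torch.linspace(-1, 1, h, device=right.device).view(1, h, 1).expand(b, h, w)
    # Shift sampling locations left by the predicted disparity,
    # converted to grid_sample's normalized [-1, 1] coordinates.
    xs = xs - 2.0 * disparity.squeeze(1) / (w - 1)
    grid = torch.stack((xs, ys), dim=-1)               # (B, H, W, 2)
    return F.grid_sample(right, grid, align_corners=True)

def photometric_loss(left, right, disparity):
    # L1 error between the true left view and its warped reconstruction;
    # no ground-truth disparity is needed anywhere in this objective.
    return F.l1_loss(warp_right_to_left(right, disparity), left)
```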