
    Facial Component Detection in Thermal Imagery

    This paper studies the problem of detecting facial components (specifically eyes, nostrils, and mouth) in thermal imagery. One of the immediate goals is to enable the automatic registration of facial thermal images. The detection of eyes and nostrils is performed using Haar features and the GentleBoost algorithm, which are shown to provide superior detection rates. The detection of the mouth is based on the detections of the eyes and the nostrils and is performed using measures of entropy and self-similarity. The results show that reliable facial component detection is feasible using this methodology, achieving a correct detection rate of 0.8 for both eyes and nostrils. Correct detection of the eyes and nostrils enables correct detection of the mouth in 65% of closed-mouth test images and in 73% of open-mouth test images.
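    As a rough illustration of the detection stage, the sketch below runs a boosted Haar cascade with OpenCV. OpenCV's cascades are boosted Haar classifiers (its training tool supports Gentle AdaBoost), but no thermal-trained cascade ships with it, so the stock visible-light eye cascade and the file names are stand-in assumptions, not the paper's detectors.

```python
import cv2

# Hypothetical input: a grayscale thermal face image.
img = cv2.imread("thermal_face.png", cv2.IMREAD_GRAYSCALE)

# Stand-in cascade: OpenCV's pretrained (visible-light) eye detector.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

# Multi-scale sliding-window detection, as in cascade-based detectors.
detections = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)
cv2.imwrite("detections.png", img)
```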

    A hybrid technique for face detection in color images

    In this paper, a hybrid technique for face detection in color images is presented. The proposed technique combines three analysis models, namely skin detection, automatic eye localization, and appearance-based face/non-face classification. Using a robust histogram-based skin detection model, skin-like pixels are first identified in the RGB color space. Based on this, face bounding boxes are extracted from the image. On detecting a face bounding box, approximate positions of the candidate mouth feature points are identified using the redness property of image pixels. A region-based eye localization step, based on the detected mouth feature points, is then applied to face bounding boxes to locate possible eye feature points in the image. Based on the distance between the detected eye feature points, face/non-face classification is performed over a normalized search area using the Bayesian discriminating feature (BDF) analysis method. Some subjective evaluation results are presented on images taken using digital cameras and a webcam, representing both indoor and outdoor scenes.
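    A minimal sketch of the first stage (skin detection in RGB) follows. The paper uses a trained histogram model; the explicit per-channel thresholds below are a common rule-of-thumb stand-in, and the input file name and area threshold are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                       # hypothetical input (BGR)
b, g, r = [c.astype(int) for c in cv2.split(img)]

# Rule-of-thumb skin test in RGB, standing in for the histogram model.
skin = ((r > 95) & (g > 40) & (b > 20) &
        (np.maximum(np.maximum(r, g), b) -
         np.minimum(np.minimum(r, g), b) > 15) &
        (np.abs(r - g) > 15) & (r > g) & (r > b))
mask = skin.astype(np.uint8) * 255

# Connected components over the skin mask yield candidate face bounding boxes.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
boxes = [tuple(stats[i, :4]) for i in range(1, n) if stats[i, 4] > 500]
```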

    Dictionary-based lip reading classification

    Visual lip reading recognition is an essential stage in many multimedia systems such as “Audio Visual Speech Recognition” [6], “Mobile Phone Visual System for deaf people”, “Sign Language Recognition System”, etc. The use of lip visual features to help audio or hand recognition is appropriate because this information is robust to acoustic noise. In this paper, we describe our work towards developing a robust technique for lip reading classification that extracts the lips in a colour image by using EMPCA feature extraction and k-nearest-neighbor classification. In order to reduce the dimensionality of the feature space, the lip motion is characterized by three templates that are modelled based on different mouth shapes: closed template, semi-closed template, and wide-open template. Our goal is to classify each image sequence based on the distribution of the three templates and group the words into different clusters. The words that form the database were grouped into three different clusters as follows: group1 (‘I’, ‘high’, ‘lie’, ‘hard’, ‘card’, ‘bye’), group2 (‘you’, ‘owe’, ‘word’), group3 (‘bird’).
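    The sketch below illustrates the classification stage under stated assumptions: ordinary PCA stands in for EMPCA, and random vectors stand in for real lip-region features; only the PCA + k-nearest-neighbor + template-distribution structure mirrors the description above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 400))       # stand-in lip-region feature vectors
y = np.repeat([0, 1, 2], 30)         # closed / semi-closed / wide-open

# Dimensionality reduction (plain PCA here; the paper uses EMPCA) and k-NN.
X_low = PCA(n_components=10).fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_low, y)

# Each frame of a word's sequence maps to one template; the word is then
# represented by its distribution over the three templates.
frame_labels = knn.predict(X_low[:12])
template_hist = np.bincount(frame_labels, minlength=3) / len(frame_labels)
```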

    An Improved Fatigue Detection System Based on Behavioral Characteristics of Driver

    In recent years, road accidents have increased significantly. One of the major reasons for these accidents, as reported, is driver fatigue. Due to continuous and prolonged driving, the driver gets exhausted and drowsy, which may lead to an accident. Therefore, there is a need for a system that measures the driver's fatigue level and raises an alert on drowsiness to avoid accidents. Thus, we propose a system comprising a camera installed on the car dashboard. The camera detects the driver's face, observes alterations in its facial features, and uses these features to estimate the fatigue level. Facial features include the eyes and mouth. Principal Component Analysis is implemented to reduce the features while minimizing the amount of information lost. The parameters thus obtained are processed through a Support Vector Classifier to classify the fatigue level, and the classifier output is then sent to the alert unit.
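    A hedged sketch of the PCA-then-SVM pipeline described above, with synthetic eye/mouth feature vectors and dummy labels standing in for the paper's data; the component count and kernel are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))     # stand-in eye/mouth features per frame
y = rng.integers(0, 2, size=200)   # dummy labels: 0 = alert, 1 = drowsy

# PCA reduces the features; the SVM classifies the fatigue level.
clf = make_pipeline(PCA(n_components=16), SVC(kernel="rbf"))
clf.fit(X, y)

# The classifier output drives the alert unit.
if clf.predict(X[:1])[0] == 1:
    print("trigger alert unit")
```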

    Fast MicroSleep and Yawning Detections to Assess Driver’s Vigilance Level

    Driver hypovigilance, often caused by fatigue and/or drowsiness, has received increasing attention in recent years, especially after it became evident that hypovigilance is one of the major factors causing traffic accidents. Monitoring and detecting driver hypovigilance could contribute significantly to improving road traffic safety. This paper proposes fast methods to identify drowsiness and fatigue using microsleep and yawning detection, respectively. The proposed scheme begins with face detection using local Successive Mean Quantization Transform (SMQT) features and a split-up Sparse Network of Winnows (SNoW) classifier. After face detection, a novel approach for eye/mouth detection, based on the Circular Hough Transform (CHT), is applied to the extracted eye and mouth regions. Our proposed methods work in real time and yield high detection rates for both drowsiness and fatigue.
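    The sketch below shows one plausible reading of the Circular Hough Transform step: a wide-open (yawning) mouth appears roughly circular in the extracted region, so a detected circle can flag a yawn. The file name, blur size, and radius bounds are assumptions.

```python
import cv2

# Hypothetical input: a grayscale crop of the extracted mouth region.
region = cv2.imread("mouth_region.png", cv2.IMREAD_GRAYSCALE)
region = cv2.medianBlur(region, 5)         # suppress noise before the CHT

circles = cv2.HoughCircles(region, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=60)
yawning = circles is not None              # a circle suggests an open mouth
```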

    An Approach for Improving Automatic Mouth Emotion Recognition

    The study proposes and tests a technique for automated emotion recognition through mouth detection via Convolutional Neural Networks (CNN), meant to support people whose health disorders cause communication-skills issues (e.g. muscle wasting, stroke, autism, or, more simply, pain) by recognizing emotions and generating real-time feedback or data for feeding supporting systems. The software system starts the computation by identifying whether a face is present in the acquired image; it then looks for the mouth location and extracts the corresponding features. Both tasks are carried out using Haar feature-based classifiers, which guarantee fast execution and promising performance. Whereas our previous works focused on visual micro-expressions for personalized training on a single user, this strategy aims to also train the system on generalized face data sets.
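    A minimal sketch of the detection front end: Haar cascades locate the face and then a mouth-like region in its lower half; the cropped patch would then feed the emotion CNN (not shown). OpenCV's stock smile cascade is used as a stand-in for the paper's mouth classifier, and the file name is an assumption.

```python
import cv2

img = cv2.imread("subject.jpg")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

face_c = cv2.CascadeClassifier(cv2.data.haarcascades +
                               "haarcascade_frontalface_default.xml")
mouth_c = cv2.CascadeClassifier(cv2.data.haarcascades +
                                "haarcascade_smile.xml")

for (x, y, w, h) in face_c.detectMultiScale(gray, 1.1, 5):
    roi = gray[y + h // 2:y + h, x:x + w]  # mouth lies in the lower face half
    for (mx, my, mw, mh) in mouth_c.detectMultiScale(roi, 1.5, 11):
        mouth_patch = roi[my:my + mh, mx:mx + mw]   # would be fed to the CNN
```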

    Deep Multimodal Learning for Audio-Visual Speech Recognition

    In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of 41% under clean conditions on the IBM large-vocabulary audio-visual studio dataset, this fusion model achieves a PER of 35.83%, demonstrating the tremendous value of the visual channel in phone classification even in audio with a high signal-to-noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class-specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of 34.03%.
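    The fused model's structure can be sketched as below: the final hidden layers of separately trained audio and visual networks are concatenated, and a joint network is built on top. All layer sizes and the phone-class count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Stand-ins for the separately trained uni-modal networks (frozen here).
audio_net = nn.Sequential(nn.Linear(440, 1024), nn.ReLU())
visual_net = nn.Sequential(nn.Linear(300, 1024), nn.ReLU())

# Joint network built on the fused feature space.
fusion_net = nn.Sequential(nn.Linear(2048, 1024), nn.ReLU(),
                           nn.Linear(1024, 42))    # 42 phone classes (assumed)

a = torch.randn(8, 440)        # batch of audio features
v = torch.randn(8, 300)        # batch of visual features
with torch.no_grad():          # uni-modal nets are pretrained and fixed
    joint = torch.cat([audio_net(a), visual_net(v)], dim=1)
phone_logits = fusion_net(joint)
```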

    IR-Depth Face Detection and Lip Localization Using Kinect V2

    Face recognition and lip localization are two main building blocks in the development of audio-visual automatic speech recognition (AV-ASR) systems. In many earlier works, face recognition and lip localization were conducted in uniform lighting conditions with simple backgrounds. However, such conditions are seldom the case in real-world applications. In this paper, we present an approach to face recognition and lip localization that is invariant to lighting conditions. This is done by employing infrared and depth images captured by the Kinect V2 device. First, we present the use of infrared images for face detection. Second, we use the face’s inherent depth information to reduce the search area for the lips by developing a nose point detection method. Third, we further reduce the search area by using a depth segmentation algorithm to separate the face from its background. Finally, with the reduced search range, we present a method for lip localization based on depth gradients. Experimental results demonstrated an accuracy of 100% for face detection and 96% for lip localization.
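    One plausible reading of the nose-point step is sketched below: within the detected face box, the nose tip is usually the point closest to the Kinect, i.e. the depth minimum; lips are then searched below it using vertical depth gradients. The file name, box coordinates, and strip width are assumptions.

```python
import numpy as np

depth = np.load("face_depth.npy")      # hypothetical Kinect V2 depth frame
x, y, w, h = 120, 80, 160, 200         # face box from the IR detector (assumed)

face = depth[y:y + h, x:x + w].astype(float)
face[face == 0] = np.nan               # Kinect reports 0 where depth is invalid

# Nose point: the closest valid pixel to the sensor inside the face box.
ny, nx = np.unravel_index(np.nanargmin(face), face.shape)

# Lip search region below the nose, guided by vertical depth gradients.
strip = face[ny:, max(nx - 30, 0):nx + 30]
grad = np.gradient(strip, axis=0)
```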