16 research outputs found

    A Self-initializing Eyebrow Tracker for Binary Switch Emulation

    Full text link
    We designed the Eyebrow-Clicker, a camera-based human-computer interface system that implements a new form of binary switch. When the user raises his or her eyebrows, the binary switch is activated and a selection command is issued. The Eyebrow-Clicker thus replaces the "click" functionality of a mouse. The system initializes itself by detecting the user's eyes and eyebrows, tracks these features at frame rate, and recovers in the event of errors. The initialization uses the natural blinking of the human eye to select suitable templates for tracking. Once execution has begun, a user therefore never has to restart the program or even touch the computer. In our experiments with human-computer interaction software, the system correctly determined when a user raised his or her eyebrows 93% of the time. Office of Naval Research; National Science Foundation (IIS-0093367).
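
    A minimal sketch of the eyebrow-switch idea is given below, assuming OpenCV Haar cascades for face and eye detection and a simple brow-to-eye distance threshold in place of the paper's blink-initialized template tracking; the brow-location heuristic and tuning constants are hypothetical, not the authors' method.

```python
# Minimal sketch of a camera-based eyebrow "click" switch. Assumptions: OpenCV
# Haar cascades for face/eye detection and a brow-to-eye distance threshold;
# the paper's blink-based self-initialization and template tracking are not
# reproduced, and the tuning constants below are hypothetical.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def brow_eye_gap(gray_face, eye_rect):
    """Pixels from the darkest row above the eye (a rough eyebrow proxy) down to
    the top of the eye box; the gap grows when the eyebrow is raised."""
    ex, ey, ew, eh = eye_rect
    strip = gray_face[max(ey - eh, 0):ey, ex:ex + ew]    # band just above the eye
    if strip.size == 0:
        return None
    brow_row = int(np.argmin(strip.mean(axis=1)))        # darkest row ~ eyebrow
    return strip.shape[0] - brow_row

cap = cv2.VideoCapture(0)
baseline, alpha, raise_factor = None, 0.05, 1.4          # hypothetical tuning values
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face, 1.1, 5)
        gaps = [g for g in (brow_eye_gap(face, e) for e in eyes[:2]) if g]
        if not gaps:
            continue
        gap = float(np.mean(gaps))
        # Slowly track the resting gap; a sudden, large gap counts as a raise.
        baseline = gap if baseline is None else (1 - alpha) * baseline + alpha * gap
        if gap > raise_factor * baseline:
            print("click")                               # binary switch activated
    cv2.imshow("eyebrow-switch", frame)
    if cv2.waitKey(1) == 27:                             # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```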

    Facial expression recognition for a sociable robot

    Get PDF
    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 53-54). In order to develop a sociable robot that can operate in the social environment of humans, we need a robot system that can recognize the emotions of the people it interacts with and respond to them accordingly. In this thesis, I present a facial expression system that recognizes the facial features of human subjects in an unsupervised manner and interprets the facial expressions of the individuals. The facial expression system is integrated with an existing emotional model for the expressive humanoid robot, Mertz. By Wing Hei Iris Tang. M.Eng.

    Facial Action Recognition for Facial Expression Analysis from Static Face Images

    No full text
    Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multidetector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of the facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract 10 profile-contour fiducial points and 19 fiducial points on the contours of the facial components. Based on these, 32 individual facial muscle actions (action units, AUs) occurring alone or in combination are recognized using rule-based reasoning. With each scored AU, the algorithm associates a factor denoting the certainty with which the pertinent AU has been scored. A recognition rate of 86% is achieved.
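
    The sketch below illustrates the general idea of rule-based AU scoring from fiducial points with a per-AU certainty factor; the point names, rules, and thresholds are hypothetical and do not reproduce the paper's actual rule base.

```python
# Illustrative rule-based AU scoring from fiducial points, with a certainty
# factor attached to each scored AU (hypothetical points/thresholds, not the
# paper's actual rules).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def certainty(value, threshold, saturation):
    """Map how far a measurement exceeds its threshold to a certainty in [0, 1]."""
    if value <= threshold:
        return 0.0
    return min((value - threshold) / (saturation - threshold), 1.0)

def score_aus(pts, neutral):
    """pts / neutral: dicts of named fiducial points (x, y) for the current and
    the neutral-expression frame. Returns {AU: certainty} for the AUs scored."""
    scored = {}
    # AU1 (inner brow raiser): inner brow moves upward relative to neutral.
    raise_amt = neutral["inner_brow"][1] - pts["inner_brow"][1]
    c = certainty(raise_amt, threshold=2.0, saturation=8.0)
    if c > 0:
        scored["AU1"] = c
    # AU12 (lip corner puller): mouth corners spread apart.
    widen = dist(pts["mouth_left"], pts["mouth_right"]) - dist(neutral["mouth_left"], neutral["mouth_right"])
    c = certainty(widen, threshold=3.0, saturation=12.0)
    if c > 0:
        scored["AU12"] = c
    # AU26 (jaw drop): vertical gap between the lips increases.
    openness = (pts["lower_lip"][1] - pts["upper_lip"][1]) - (neutral["lower_lip"][1] - neutral["upper_lip"][1])
    c = certainty(openness, threshold=4.0, saturation=15.0)
    if c > 0:
        scored["AU26"] = c
    return scored

# Example: a smiling frame compared against a neutral frame (pixel coordinates).
neutral = {"inner_brow": (40, 30), "mouth_left": (30, 90), "mouth_right": (70, 90),
           "upper_lip": (50, 85), "lower_lip": (50, 95)}
current = {"inner_brow": (40, 24), "mouth_left": (24, 88), "mouth_right": (78, 88),
           "upper_lip": (50, 85), "lower_lip": (50, 97)}
print(score_aus(current, neutral))   # -> {'AU1': 0.67, 'AU12': 1.0}
```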

    Facial Expression Recognition in the Presence of Head Motion

    Get PDF

    Machine Analysis of Facial Expressions

    Get PDF
    No abstract

    Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences

    Full text link

    Assessing the match performance of non-ideal operational facial images using 3D image data.

    Get PDF
    Biometric attributes are unique characteristics specific to an individual that can be used in automated identification schemes. There have been considerable advancements in the field of face recognition recently, but challenges still exist. One of these challenges is pose variation, specifically roll, pitch, and yaw away from a frontal image. The goal of this problem report is to assess the improvement in facial recognition performance obtainable with commercial pose-correction software. This was done using pose-corrected images obtained in two ways: 1) non-frontal images generated and corrected using 3D facial scans (pseudo-pose-correction) and 2) the same non-frontal images corrected using FaceVACS DBScan. Two matchers were used to evaluate matching performance, namely Cognitec FaceVACS and MegaMatcher 5.0 SDK. A set of matching experiments was conducted using frontal, non-frontal, and pose-corrected images to assess the improvement in matching performance: 1) frontal (probe) to frontal (gallery) images, to generate the baseline; 2) non-ideal pose-varying (probe) to frontal (gallery); 3) pseudo-pose-corrected (probe) to frontal (gallery); and 4) auto-pose-corrected (probe) to frontal (gallery). Cumulative match characteristic (CMC) curves are used to evaluate the match scores generated. The matching results show better performance for pseudo-pose-corrected images than for the non-frontal images, with rank accuracy of 100% at angles for which the matchers could not detect a face in the non-frontal case. Of the two commercial matchers, Cognitec, whose software is optimized for non-frontal models, showed better performance in detecting faces with angular rotations. MegaMatcher, which is not a pose-correction matcher, was unable to detect faces at larger rotations: 50° and 60° in pitch, more than 40° in yaw, and 4 out of 8 coupled pitch/yaw combinations. The requirements of the facial recognition application will influence the decision to implement pose-correction tools.
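
    For reference, the sketch below shows how a CMC curve (rank-k identification accuracy) is computed from a probe-by-gallery similarity matrix; it assumes closed-set identification with higher scores meaning better matches, and is not the report's Cognitec/MegaMatcher pipeline.

```python
# Minimal sketch of computing a cumulative match characteristic (CMC) curve
# from a probe-by-gallery similarity matrix (assumed closed-set identification;
# illustrative only).
import numpy as np

def cmc(scores, probe_ids, gallery_ids, max_rank=10):
    """scores[i, j] = similarity of probe i to gallery j (higher is better).
    Returns the fraction of probes whose true mate appears within each rank."""
    scores = np.asarray(scores, dtype=float)
    probe_ids = np.asarray(probe_ids)
    gallery_ids = np.asarray(gallery_ids)
    ranks = []
    for i in range(scores.shape[0]):
        order = np.argsort(-scores[i])                       # gallery sorted best-first
        mate_positions = np.where(gallery_ids[order] == probe_ids[i])[0]
        if mate_positions.size:                              # 1-based rank of the true mate
            ranks.append(mate_positions[0] + 1)
    ranks = np.asarray(ranks)
    return np.array([(ranks <= r).mean() for r in range(1, max_rank + 1)])

# Toy example: 3 probes against a 4-subject gallery.
gallery_ids = ["A", "B", "C", "D"]
probe_ids = ["A", "C", "D"]
scores = [[0.9, 0.2, 0.3, 0.1],    # probe A: mate ranked 1st
          [0.4, 0.5, 0.3, 0.2],    # probe C: mate ranked 3rd
          [0.1, 0.2, 0.3, 0.9]]    # probe D: mate ranked 1st
print(cmc(scores, probe_ids, gallery_ids, max_rank=4))
# -> [0.667 0.667 1.    1.   ]  (rank-1 through rank-4 accuracy)
```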

    Meticulously detailed eye region model and its application to analysis of facial images

    Full text link

    Machine Analysis of Facial Expressions

    Get PDF

    Analysis of facial expressions in children: Experiments based on the DB Child Affective Facial Expression (CAFE)

    Get PDF
    Analysis of facial expressions in children aged 2 to 8 years, and identification of emotions. Language: English