
    Modelling and Analysis of Facial Expressions of Emotions

    This work presents a computer technique for detecting emotions on human faces. The emotion model is a linear combination of basic emotions, whose parameters are found by the method of deformable templates using NURBS curves. An arbitrary emotion on a human face is then detected as a convex combination of the basic emotions.
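
    As a minimal sketch of the detection step, the convex combination can be fit as a small constrained least-squares problem. Here each basic emotion is assumed to be summarized by a parameter vector (e.g., flattened NURBS control-point coordinates); the function convex_weights and the SLSQP solver are illustrative choices, not the paper's published procedure.

        import numpy as np
        from scipy.optimize import minimize

        def convex_weights(B, x):
            """Fit weights w >= 0 with sum(w) == 1 so B @ w approximates x.

            B: (d, k) matrix whose columns are basic-emotion parameter vectors.
            x: (d,) parameter vector measured from the observed face.
            """
            k = B.shape[1]
            w0 = np.full(k, 1.0 / k)  # start from the uniform mixture
            res = minimize(
                lambda w: np.sum((B @ w - x) ** 2),  # squared fitting error
                w0,
                method="SLSQP",
                bounds=[(0.0, 1.0)] * k,
                constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
            )
            return res.x  # the detected emotion as a mixture of basic emotions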

    Automated Students Attendance System

    The Automated Students' Attendance System takes the attendance of students in a class automatically, improving on the current manual process. This work presents a computerized attendance system that applies genetic algorithms to face recognition. Face-template extraction, particularly of the T-zone (the symmetric region spanning the eyes, nose and mouth), is performed after face detection using specific HSV colour-space ranges, followed by template matching. Two types of templates are used: one based on edge detection and another on the intensity plane of the YIQ colour space. Face recognition with genetic algorithms is then performed to complete the automated attendance system. With this attendance system in place, truancy could be reduced substantially.
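
    As a minimal sketch of the colour-based extraction step, the code below builds a binary skin mask with OpenCV. The HSV thresholds are hypothetical placeholders, since the paper's specific HSV colour-space ranges are not quoted in the abstract.

        import cv2
        import numpy as np

        # Hypothetical HSV skin range (OpenCV hue spans 0-179); the paper's
        # exact thresholds are not given in the abstract.
        LOWER = np.array([0, 40, 60], dtype=np.uint8)
        UPPER = np.array([25, 180, 255], dtype=np.uint8)

        def skin_mask(bgr_image):
            """Binary mask of candidate face regions via HSV thresholds."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, LOWER, UPPER)
            # Remove speckle before template matching on surviving regions.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)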

    3D Model Based Pose Invariant Face Recognition from a Single Frontal View

    This paper proposes a 3D model based pose-invariant face recognition method that can recognize a face at a large rotation angle from a single, nearly frontal view. The method takes an analytic-to-holistic approach and introduces a novel algorithm for estimating ear points. First, facial features are detected, with an edge-map-based algorithm developed to locate the ear points. From the detected facial feature points, 3D face models are computed and used for pose estimation. The facial feature points' locations are then reconstructed, and frontal-view facial feature templates are synthesized using the computed face models and estimated poses. Finally, recognition is performed by matching the corresponding templates and the corresponding geometric features. Experimental results show that the method is robust to pose variations, including both seesaw and sidespin rotations.
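
    The final template-matching stage can be illustrated with normalized cross-correlation; cv2.TM_CCOEFF_NORMED stands in as an assumption here, since the abstract does not specify the similarity measure.

        import cv2

        def match_feature(image_gray, template_gray):
            """Locate a synthesized frontal-view feature template in a probe
            image via normalized cross-correlation (an assumed stand-in for
            the paper's template-matching step)."""
            scores = cv2.matchTemplate(image_gray, template_gray,
                                       cv2.TM_CCOEFF_NORMED)
            _, best_score, _, top_left = cv2.minMaxLoc(scores)
            return top_left, best_score  # best match position and its score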

    Ingroup and outgroup differences in face detection

    Humans show improved recognition for faces from their own social group relative to faces from another social group. Yet before faces can be recognized, they must first be detected in the visual field. Here, we tested whether humans also show an ingroup bias at the earliest stage of face processing – the point at which the presence of a face is first detected. To this end, we measured viewers' ability to detect ingroup (Black and White) and outgroup faces (Asian, Black, and White) in everyday scenes. Ingroup faces were detected with greater speed and accuracy relative to outgroup faces (Experiment 1). Removing face hue impaired detection generally, but the ingroup detection advantage was undiminished (Experiment 2). This same pattern was replicated by a detection algorithm using face templates derived from human data (Experiment 3). These findings demonstrate that the established ingroup bias in face processing can extend to the early process of detection. This effect is ‘colour blind’, in the sense that group membership effects are independent of general effects of image hue. Moreover, it can be captured by tuning visual templates to reflect the statistics of observers' social experience. We conclude that group bias in face detection is both a visual and a social phenomenon.
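
    The template-tuning idea behind Experiment 3 can be sketched as averaging aligned face images from a simulated observer's "experience" set; the normalization step and the function name are illustrative assumptions, not the authors' published algorithm.

        import numpy as np

        def experience_template(aligned_faces):
            """Average of aligned grayscale face crops: a visual template
            tuned to the statistics of the faces an observer has seen.
            Zero-mean / unit-variance normalization (an assumed detail)
            prepares it for correlation-based detection."""
            stack = np.stack([f.astype(np.float64) for f in aligned_faces])
            template = stack.mean(axis=0)
            return (template - template.mean()) / (template.std() + 1e-8)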

    Facial Feature Tracking and Occlusion Recovery in American Sign Language

    Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Identification of these facial gestures is therefore essential to sign language recognition. One problem with detecting such grammatical indicators is occlusion recovery: if the signer's hand blocks his or her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by Deaf native signers, and it detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes. This work was supported by the National Science Foundation (IIS-0329009, IIS-0093367, IIS-9912573, EIA-0202067, EIA-9809340).
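
    The occlusion-recovery behaviour can be sketched as template tracking with a confidence test; the threshold, search margin, and hold-and-redetect policy below are illustrative assumptions, not the system's published parameters.

        import cv2

        MATCH_THRESHOLD = 0.5  # hypothetical confidence cut-off
        SEARCH_MARGIN = 20     # hypothetical search margin in pixels

        def track_step(frame_gray, template, last_box):
            """One tracking step: search near the last known position; if the
            best match is too weak (e.g., a hand covers the eyebrows), hold
            the previous estimate and report occlusion so the caller keeps
            re-detecting until the feature reappears."""
            x, y, w, h = last_box
            x0 = max(0, x - SEARCH_MARGIN)
            y0 = max(0, y - SEARCH_MARGIN)
            roi = frame_gray[y0:y + h + SEARCH_MARGIN,
                             x0:x + w + SEARCH_MARGIN]
            scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
            _, best, _, (dx, dy) = cv2.minMaxLoc(scores)
            if best < MATCH_THRESHOLD:
                return last_box, True   # occluded: freeze the estimate
            return (x0 + dx, y0 + dy, w, h), False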

    Likelihood Ratio-Based Detection of Facial Features

    One of the first steps in face recognition, after image acquisition, is registration. A simple but effective registration technique is to align facial features, such as the eyes, nose and mouth, as closely as possible to a standard face. This requires an accurate automatic estimate of the locations of those features. This contribution proposes a method for estimating the locations of facial features based on likelihood-ratio detection. A post-processing step that evaluates the topology of the facial features is added to reduce the number of false detections. Although the individual detectors achieve only moderate performance (equal error rates range from 3.3% for the eyes to 1.0% for the nose), the positions of the facial features are estimated correctly in 95% of the face images.
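
    The core likelihood-ratio test can be sketched with per-pixel Gaussian models for "feature" and "background" patches; the Gaussian form and the parameter names are assumptions, since the abstract does not specify the density models.

        import numpy as np

        def log_likelihood_ratio(patch, mu_f, var_f, mu_b, var_b):
            """Log-likelihood ratio of an image patch under independent
            Gaussian models for 'feature present' (mu_f, var_f) versus
            'background' (mu_b, var_b), all given as flattened arrays.
            Scanning this score over the image and taking the maximum gives
            the location estimate; thresholding it gives the detector."""
            p = patch.ravel().astype(np.float64)
            ll_f = -0.5 * np.sum(np.log(2 * np.pi * var_f)
                                 + (p - mu_f) ** 2 / var_f)
            ll_b = -0.5 * np.sum(np.log(2 * np.pi * var_b)
                                 + (p - mu_b) ** 2 / var_b)
            return ll_f - ll_b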