19 research outputs found

    Face analysis using curve edge maps

    This paper proposes an automatic, real-time system for face analysis, usable in visual communication applications. Faces are represented with Curve Edge Maps: collections of polynomial segments with a convex region, extracted from edge pixels using an adaptive incremental linear-time fitting algorithm based on constructive polynomial fitting. The face analysis system covers face tracking, face recognition and facial feature detection, using Curve Edge Maps driven by histograms of intensities and histograms of relative positions. Applied to different face databases and video sequences, the system achieves an average face recognition rate of 95.51% and an average facial feature detection rate of 91.92%, with facial feature localization accurate to 2.18% of the face size, which is comparable with or better than results in the literature. Moreover, the method has the advantages of simplicity, real-time performance and extensibility to other aspects of face analysis, such as recognition of facial expressions and talking faces.
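    The adaptive incremental fitting step can be illustrated with a minimal sketch (this is not the paper's constructive polynomial fitting algorithm; the names `fit_error`, `incremental_segments`, and the tolerance `tol` are illustrative assumptions): grow a segment one edge pixel at a time and close it once the least-squares polynomial fit error exceeds a threshold.

    ```python
    import numpy as np

    def fit_error(points, degree=1):
        """Max residual of a least-squares polynomial fit to a run of edge pixels."""
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        coeffs = np.polyfit(x, y, degree)
        return np.abs(np.polyval(coeffs, x) - y).max()

    def incremental_segments(pixels, degree=1, tol=1.0):
        """Grow a segment pixel by pixel; start a new one when the fit error exceeds tol."""
        segments, current = [], []
        for p in pixels:
            candidate = current + [p]
            # With degree+1 points or fewer the fit is exact, so only test beyond that.
            if len(candidate) > degree + 1 and fit_error(candidate, degree) > tol:
                segments.append(current)   # close the segment before the offending pixel
                current = [p]
            else:
                current = candidate
        if current:
            segments.append(current)
        return segments
    ```

    For example, edge pixels along a V-shaped contour split into two line segments at the corner, since no single degree-1 fit stays within the tolerance across it.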

    Face alignment using local hough voting

    We present a novel Hough voting-based method to improve the efficiency and accuracy of fiducial point localization, which can be conveniently integrated with any global prior model for final face alignment. Specifically, two or more stable facial components (e.g., the eyes) are first localized and fixed as anchor points, and a separate local voting map is then constructed for each fiducial point using kernel density estimation. The voting map effectively constrains the search region of each fiducial point through the local spatial constraints it encodes. In addition, a multi-output ridge regression method is adopted to align the voting map and the response map of local detectors to the ground-truth map, and the learned transformations are exploited to further increase the robustness of the algorithm against various appearance variations. Encouraging experimental results are given on several publicly available face databases.
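    The voting-map construction can be sketched as follows (a minimal assumption-laden illustration, not the paper's implementation): given an anchor point and a set of training offsets from that anchor to the fiducial point, sum a Gaussian kernel at each predicted location to obtain a density over the image.

    ```python
    import numpy as np

    def voting_map(anchor, offsets, shape, bandwidth=2.0):
        """Kernel density estimate over candidate fiducial locations.

        anchor   -- (row, col) of a stable component, e.g. an eye center
        offsets  -- training-set (d_row, d_col) displacements anchor -> fiducial
        shape    -- (height, width) of the map to build
        """
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        vmap = np.zeros(shape)
        for dy, dx in offsets:
            cy, cx = anchor[0] + dy, anchor[1] + dx
            # Isotropic Gaussian kernel centered on the predicted location.
            vmap += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * bandwidth ** 2))
        return vmap / vmap.sum()   # normalize to a density
    ```

    A local detector would then only be evaluated where the density is non-negligible, which is how the voting map constrains the search region.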

    Hierarchical face parsing via deep learning

    This paper investigates how to parse (segment) facial components from face images that may be partially occluded. We propose a novel face parser that recasts segmentation of face components as a cross-modality data transformation problem, i.e., transforming an image patch into a label map. Specifically, a face is represented hierarchically by parts, components, and pixel-wise labels. With this representation, our approach first detects faces at both the part and component levels, and then computes the pixel-wise label maps (Fig. 1). Our part-based and component-based detectors are generatively trained with a deep belief network (DBN) and discriminatively tuned by logistic regression. The segmentators transform the detected face components into label maps, obtained by learning a highly nonlinear mapping with a deep autoencoder. The proposed hierarchical face parsing is not only robust to partial occlusions but also provides richer information for face analysis and face synthesis than face keypoint detection and face alignment. The effectiveness of our algorithm is shown through several tasks on 2,239 images selected from three datasets (LFW [12], BioID [13] and CUFSF [29]).
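    As a loose illustration of the cross-modality idea (learning a nonlinear mapping from an image patch to a label map), here is a single-hidden-layer network in numpy trained on synthetic data; the toy task, the architecture, and all names are assumptions, and this does not reproduce the paper's DBN pretraining or deep autoencoder.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def bce(p, y):
        # Binary cross-entropy between predicted label map p and ground truth y.
        return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean()

    # Toy "cross-modality" data: flattened 8x8 patches whose label map marks
    # pixels brighter than the patch mean (a hypothetical stand-in for labels).
    patches = rng.random((200, 64))
    labels = (patches > patches.mean(axis=1, keepdims=True)).astype(float)

    # One hidden layer mapping patch -> label map.
    W1 = rng.normal(0.0, 0.1, (64, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0.0, 0.1, (32, 64)); b2 = np.zeros(64)

    def forward(x):
        h = sigmoid(x @ W1 + b1)
        return h, sigmoid(h @ W2 + b2)

    loss_before = bce(forward(patches)[1], labels)
    lr = 0.5
    for _ in range(300):
        h, out = forward(patches)
        grad_out = (out - labels) / len(patches)       # BCE gradient w.r.t. logits
        grad_h = (grad_out @ W2.T) * h * (1.0 - h)     # backprop through hidden layer
        W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
        W1 -= lr * patches.T @ grad_h; b1 -= lr * grad_h.sum(axis=0)
    loss_after = bce(forward(patches)[1], labels)
    ```

    Training drives the cross-entropy down, so the network's output can be thresholded into a per-pixel label map for each patch.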