
    Face Attribute Prediction Using Off-the-Shelf CNN Features

    Predicting attributes from face images in the wild is a challenging computer vision problem. To automatically describe face attributes from face-containing images, one traditionally needs to cascade three technical blocks --- face localization, facial descriptor construction, and attribute classification --- in a pipeline. As a typical classification problem, face attribute prediction has been addressed using deep learning. The current state-of-the-art performance was achieved by two cascaded Convolutional Neural Networks (CNNs), which were specifically trained to learn face localization and attribute description. In this paper, we experiment with an alternative way of employing the power of deep representations from CNNs. Combined with conventional face localization techniques, we use off-the-shelf architectures trained for face recognition to build facial descriptors. Recognizing that describable face attributes are diverse, our face descriptors are constructed from different levels of the CNNs for different attributes to best facilitate face attribute prediction. Experiments on two large datasets, LFWA and CelebA, show that our approach is entirely comparable to the state-of-the-art. Our findings not only demonstrate an efficient face attribute prediction approach, but also raise an important question: how to leverage the power of off-the-shelf CNN representations for novel tasks. Comment: In proceedings of the 2016 International Conference on Biometrics (ICB).
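
    As a rough illustration of the idea described above, the sketch below pulls activations from two levels of an off-the-shelf CNN and trains one linear classifier per attribute. It assumes a recent torchvision and scikit-learn; a generic ImageNet ResNet stands in for the face-recognition networks used in the paper, and the layer-per-attribute mapping is purely illustrative.

```python
# Sketch: per-attribute linear classifiers on intermediate CNN activations.
# Assumption: a generic ImageNet ResNet stands in for the face-recognition
# CNNs used in the paper; the layer chosen per attribute is illustrative.
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}
def hook(name):
    def _hook(module, inp, out):
        activations[name] = out.flatten(start_dim=1)
    return _hook

cnn.layer3.register_forward_hook(hook("mid"))    # mid-level features
cnn.avgpool.register_forward_hook(hook("high"))  # high-level features

def extract(images):
    """images: (N, 3, 224, 224) tensors, already face-localized and normalized."""
    with torch.no_grad():
        cnn(images)
    return {k: v.numpy() for k, v in activations.items()}

# One linear classifier per attribute, each fed from the level that suits it.
attribute_layer = {"Smiling": "high", "Wearing_Hat": "mid"}  # illustrative choice

def train_attribute_classifiers(images, labels):
    """labels: {attribute_name: (N,) binary array}."""
    feats = extract(images)
    return {a: LinearSVC().fit(feats[attribute_layer[a]], labels[a])
            for a in attribute_layer}
```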

    Deep Learning Face Attributes in the Wild

    Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently. LNet is pre-trained on massive general object categories for face localization, while ANet is pre-trained on massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art by a large margin, but also reveals valuable facts about learning face representations. (1) It shows how the performance of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images give a strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, without the face bounding boxes or landmarks required by existing attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained by a sparse linear combination of these concepts. Comment: To appear in the International Conference on Computer Vision (ICCV) 2015.
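
    A minimal sketch of this cascade wiring is given below, with small stand-in modules in place of the actual LNet and ANet (whose architectures and pre-training are not reproduced here): the first network produces a response map, the strongest response seeds a crop, and the second network maps the crop to attribute logits.

```python
# Sketch of the LNet -> ANet cascade wiring, with toy stand-in networks.
import torch
import torch.nn as nn

class LNetStandIn(nn.Module):
    """Produces a single-channel response map indicating face location."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))  # 1-channel response map
    def forward(self, x):
        return self.features(x)

class ANetStandIn(nn.Module):
    """Maps a cropped face to attribute logits (40 attributes, as in CelebA)."""
    def __init__(self, n_attributes=40):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, n_attributes)
    def forward(self, x):
        return self.head(self.backbone(x))

def crop_from_response(image, response, size=64):
    """Center a fixed-size crop on the strongest response: a simple proxy
    for the paper's thresholding of averaged response maps."""
    _, h, w = image.shape
    idx = response.squeeze(0).argmax()
    cy, cx = divmod(idx.item(), response.shape[-1])
    cy = int(cy * h / response.shape[-2]); cx = int(cx * w / response.shape[-1])
    top = max(0, min(h - size, cy - size // 2))
    left = max(0, min(w - size, cx - size // 2))
    return image[:, top:top + size, left:left + size]

lnet, anet = LNetStandIn(), ANetStandIn()
image = torch.rand(3, 128, 128)
with torch.no_grad():
    response = lnet(image.unsqueeze(0))[0]       # (1, 128, 128) response map
    face = crop_from_response(image, response)   # (3, 64, 64) face crop
    logits = anet(face.unsqueeze(0))             # (1, 40) attribute logits
```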

    A hybrid technique for face detection in color images

    In this paper, a hybrid technique for face detection in color images is presented. The proposed technique combines three analysis models, namely skin detection, automatic eye localization, and appearance-based face/non-face classification. Using a robust histogram-based skin detection model, skin-like pixels are first identified in the RGB color space. Based on this, face bounding boxes are extracted from the image. On detecting a face bounding box, approximate positions of the candidate mouth feature points are identified using the redness property of image pixels. A region-based eye localization step, based on the detected mouth feature points, is then applied to the face bounding boxes to locate possible eye feature points in the image. Based on the distance between the detected eye feature points, face/non-face classification is performed over a normalized search area using the Bayesian discriminating feature (BDF) analysis method. Subjective evaluation results are presented on images taken with digital cameras and a webcam, representing both indoor and outdoor scenes.
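
    The front end of such a pipeline can be sketched as follows. The paper builds a histogram-based skin model, so the explicit RGB rule below is only a stand-in; connected-component analysis (via scipy.ndimage) then yields candidate face bounding boxes that would go on to eye localization and BDF verification.

```python
# Minimal sketch of the skin-detection front end with a rule-based stand-in
# for the paper's histogram skin model.
import numpy as np
from scipy import ndimage

def skin_mask(rgb):
    """rgb: (H, W, 3) uint8 image; returns a boolean skin-likelihood mask."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (rgb.max(-1).astype(int) - rgb.min(-1).astype(int) > 15) &
            (abs(r - g) > 15) & (r > g) & (r > b))

def face_candidate_boxes(rgb, min_area=400):
    """Group skin pixels into connected components and keep large regions."""
    labels, _ = ndimage.label(skin_mask(rgb))
    boxes = []
    for ys, xs in ndimage.find_objects(labels):
        if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_area:
            boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes  # each box then goes to eye localization and BDF verification
```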

    POSTER: Privacy-preserving Indoor Localization

    Upcoming WiFi-based localization systems for indoor environments face a conflict of privacy interests: server-side localization violates the location privacy of users, while localization on the user's device forces the localization provider to disclose the details of the system, e.g., sophisticated classification models. We show how Secure Two-Party Computation can be used to reconcile these privacy interests in a state-of-the-art localization system. Our approach provides strong privacy guarantees for all involved parties while achieving room-level localization accuracy at reasonable overhead. Comment: Poster session of the 7th ACM Conference on Security & Privacy in Wireless and Mobile Networks (WiSec'14).
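
    The privacy split can be illustrated with the sketch below. This is not the poster's Secure Two-Party Computation protocol; it swaps in additively homomorphic (Paillier) encryption from the python-paillier (phe) package, with made-up room models and RSSI values, to show how a server can score a fingerprint it never sees in the clear.

```python
# Illustrative only: Paillier encryption as a stand-in for the poster's
# Secure Two-Party Computation; room weights and RSSI values are made up.
from phe import paillier

# Client side: encrypt the WiFi RSSI fingerprint, keep the private key.
public_key, private_key = paillier.generate_paillier_keypair()
fingerprint = [-52, -70, -61]                      # RSSI per access point
enc_fp = [public_key.encrypt(v) for v in fingerprint]

# Server side: score each room with a private linear model, operating only on
# ciphertexts; the model weights never leave the server in the clear either.
room_models = {"lab": ([2, -1, 1], 10), "office": ([-1, 3, 2], 5)}
enc_scores = {room: sum(w * c for w, c in zip(weights, enc_fp)) + bias
              for room, (weights, bias) in room_models.items()}

# Client side: decrypt the scores locally and pick the most likely room.
best_room = max(enc_scores, key=lambda r: private_key.decrypt(enc_scores[r]))
print(best_room)
```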

    Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition

    Two approaches are proposed for cross-pose face recognition: one is based on the 3D reconstruction of facial components and the other on a deep Convolutional Neural Network (CNN). Unlike most 3D approaches, which consider holistic faces, the proposed approach considers 3D facial components. It segments a 2D gallery face into components, reconstructs the 3D surface for each component, and recognizes a probe face by component features. The segmentation is based on landmarks located by a hierarchical algorithm that combines the Faster R-CNN for face detection and the Reduced Tree Structured Model for landmark localization. The core part of the CNN-based approach is a revised VGG network. We study performance with different settings of the training set, including synthesized data from 3D reconstruction, real-life data from an in-the-wild database, and both types of data combined. We also investigate the performance of the network when it is employed as a classifier or as a feature extractor. The two recognition approaches and the fast landmark localization are evaluated in extensive experiments and compared to state-of-the-art methods to demonstrate their efficacy. Comment: 14 pages, 12 figures, 4 tables.
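
    The sketch below illustrates two of the ingredients mentioned above: cropping facial components around given landmarks, and using a VGG backbone either as an identity classifier or as a feature extractor. Landmark names, crop sizes, and the identity count are illustrative; the paper's Faster R-CNN detector and Reduced Tree Structured Model are assumed to have produced the landmarks already.

```python
# Sketch: component crops from landmarks, plus VGG as classifier vs. extractor.
import torch
import torch.nn as nn
import torchvision.models as models

def crop_components(image, landmarks, half=16):
    """image: (3, H, W); landmarks: {'left_eye': (x, y), ...} in pixels."""
    comps = {}
    _, h, w = image.shape
    for name, (x, y) in landmarks.items():
        top = max(0, min(h - 2 * half, int(y) - half))
        left = max(0, min(w - 2 * half, int(x) - half))
        comps[name] = image[:, top:top + 2 * half, left:left + 2 * half]
    return comps

comps = crop_components(torch.rand(3, 128, 128),
                        {"left_eye": (40, 50), "mouth": (64, 90)})

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# (a) classifier use: a new head over identities in the training set
n_identities = 100  # placeholder for the actual identity count
classifier_head = nn.Linear(vgg.classifier[-1].in_features, n_identities)

# (b) feature-extractor use: read the penultimate fully connected activations
feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                                  *list(vgg.classifier.children())[:-1])

face = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    descriptor = feature_extractor(face)        # (1, 4096) face descriptor
    identity_logits = classifier_head(descriptor)  # (1, n_identities)
```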