
    Age Estimation Based on Face Images and Pre-trained Convolutional Neural Networks

    Age estimation based on face images plays an important role in a wide range of scenarios, including security and defense applications, border control, human-machine interaction in ambient intelligence applications, and recognition based on soft biometric information. Recent methods based on deep learning have shown promising performance in this field. Most of these methods use deep networks specifically designed and trained to cope with this problem. Other studies apply deep networks pre-trained for face recognition and fine-tune them to achieve accurate results. Differently, in this paper we propose a preliminary study on increasing the performance of pre-trained deep networks by applying post-processing strategies. The main advantage with respect to fine-tuning strategies is the simplicity and low computational cost of the post-processing step. To the best of our knowledge, this is the first study on age estimation that proposes post-processing strategies for features extracted using pre-trained deep networks. Our method exploits a set of pre-trained Convolutional Neural Networks (CNNs) to extract features from the input face image. The method then performs a feature-level fusion, reduces the dimensionality of the feature space, and estimates the age of the individual using a Feed-Forward Neural Network (FFNN). We evaluated the performance of our method on a public dataset (the Adience Benchmark of Unfiltered Faces for Gender and Age Classification) and on a dataset of non-ideal samples affected by controlled rotations, which we collected in our laboratory. Our age estimation method obtained better or comparable results with respect to state-of-the-art techniques and achieved satisfactory performance in non-ideal conditions. Results also showed that CNNs trained on general datasets can achieve satisfactory accuracy on different types of validation images, even without applying fine-tuning methods.
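The pipeline this abstract describes (multi-CNN feature extraction, feature-level fusion, dimensionality reduction, FFNN estimation) can be sketched as follows. All sizes, the random "features", and the untrained network weights are illustrative stand-ins, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for feature vectors extracted by two pre-trained CNNs
# from 100 face images (dimensions are illustrative).
feat_cnn_a = rng.standard_normal((100, 512))
feat_cnn_b = rng.standard_normal((100, 256))

# 1) Feature-level fusion: concatenate per-image feature vectors.
fused = np.concatenate([feat_cnn_a, feat_cnn_b], axis=1)  # (100, 768)

# 2) Dimensionality reduction via PCA, computed here with an SVD.
centered = fused - fused.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 64
reduced = centered @ vt[:k].T                             # (100, 64)

# 3) Age estimation with a small feed-forward network
# (forward pass only; the weights would normally be learned).
w1 = rng.standard_normal((k, 32)) * 0.1
b1 = np.zeros(32)
w2 = rng.standard_normal((32, 1)) * 0.1
b2 = np.zeros(1)

hidden = np.maximum(reduced @ w1 + b1, 0.0)  # ReLU hidden layer
ages = hidden @ w2 + b2                      # one scalar estimate per image
```

The design point of the paper is that only steps 2–3 need to be trained or tuned; the CNN feature extractors stay frozen, which is what keeps the post-processing cheap compared with fine-tuning.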

    Automatic Age Estimation From Real-World And Wild Face Images By Using Deep Neural Networks

    Automatic age estimation from real-world and wild face images is a challenging task of increasing importance due to its wide range of applications in current and future lifestyles. As age-specific human-computer interactions become more common, computerized systems are expected to estimate a person's age from face images and respond accordingly. Over the past decade, many research studies have been conducted on automatic age estimation from face images. In this research, new approaches for enhancing the age classification of a person from face images based on deep neural networks (DNNs) are proposed. The work shows that pre-trained CNNs, originally trained on large benchmarks for different purposes, can be retrained and fine-tuned for age estimation from unconstrained face images. Furthermore, an algorithm is developed that reduces the dimension of the output of the last convolutional layer in pre-trained CNNs to improve performance. Moreover, two new jointly fine-tuned DNN frameworks are proposed. The first framework fine-tunes two DNNs with two different feature sets based on the element-wise summation of their last hidden layer outputs, while the second framework fine-tunes two DNNs based on a new cost function. In both frameworks, the first DNN is trained on facial appearance features extracted by a well-trained face recognition model, while the second DNN is trained on features based on superpixel depths and their relationships. Furthermore, a new method for selecting robust features based on the power of DNNs and the ℓ2,1-norm is proposed. This method is based on a new cost function that relates the DNN and the ℓ2,1-norm in one unified framework. To learn and train this unified framework, the convergence of the new objective function for solving the minimization problem is analyzed and proved.
    Finally, the proposed jointly fine-tuned networks and the proposed robust features are used to improve age estimation from facial images. The facial features, concatenated with their corresponding robust features, are fed to the first part of both networks, and the superpixel features, concatenated with their robust features, are fed to the second part. Experimental results on a public database show the effectiveness of the proposed methods, which achieve state-of-the-art performance.
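The ℓ2,1-norm mentioned above is the standard device for row-sparse feature selection: it sums the ℓ2 norms of a weight matrix's rows, so penalizing it drives whole rows (whole features) toward zero, and the surviving rows with large ℓ2 norms mark the selected features. A minimal sketch of that selection step (the matrix below is toy data, not the paper's learned weights):

```python
import numpy as np

def l21_norm(w):
    """ℓ2,1 norm: the sum over rows of each row's ℓ2 norm."""
    return np.sqrt((w ** 2).sum(axis=1)).sum()

def select_features(w, n_keep):
    """Keep the features (rows) with the largest ℓ2 row-norms;
    rows driven toward zero by the ℓ2,1 penalty are discarded."""
    row_norms = np.sqrt((w ** 2).sum(axis=1))
    return np.argsort(row_norms)[::-1][:n_keep]

# Toy weight matrix: one strong feature, one nearly pruned, one weak.
w = np.array([[3.0, 4.0],   # row norm 5.0 -> informative
              [0.0, 0.1],   # row norm 0.1 -> effectively pruned
              [1.0, 0.0]])  # row norm 1.0
```

Here `l21_norm(w)` is 5.0 + 0.1 + 1.0 = 6.1, and selecting two features keeps rows 0 and 2. In the paper this norm appears inside the joint cost function rather than as a post-hoc filter, but the row-sparsity mechanism is the same.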

    Understanding and Comparing Deep Neural Networks for Age and Gender Classification

    Recently, deep neural networks have demonstrated excellent performance in recognizing age and gender from human face images. However, these models were applied in a black-box manner, with no information about which facial features are actually used for prediction and how these features depend on image preprocessing, model initialization, and architecture choice. We present a study investigating these different effects. In detail, our work compares four popular neural network architectures, studies the effect of pretraining, evaluates the robustness of the considered alignment preprocessing methods via cross-method test set swapping, and intuitively visualizes the models' prediction strategies in the given preprocessing conditions using the recent Layer-wise Relevance Propagation (LRP) algorithm. Our evaluations on the challenging Adience benchmark show that suitable parameter initialization leads to a holistic perception of the input, compensating for artefactual data representations. With a combination of simple preprocessing steps, we reach state-of-the-art performance in gender recognition.
    Comment: 8 pages, 5 figures, 5 tables. Presented at ICCV 2017 Workshop: 7th IEEE International Workshop on Analysis and Modeling of Faces and Gesture
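LRP, the visualization tool this abstract relies on, redistributes a network's output score backwards layer by layer, assigning each input a share of the prediction proportional to its contribution. The epsilon rule for a single dense layer can be sketched as follows (the toy layer below is an illustration, not one of the paper's models):

```python
import numpy as np

def lrp_dense(a, w, r_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the output
    relevance r_out onto the inputs in proportion to each input's
    contribution a_j * w_jk to the pre-activation z_k."""
    z = a[:, None] * w                  # (d_in, d_out) contributions
    zs = z.sum(axis=0)                  # pre-activations z_k
    denom = zs + eps * np.where(zs >= 0, 1.0, -1.0)  # sign-matched stabilizer
    return (z / denom) @ r_out          # (d_in,) input relevances

# Toy layer: two inputs, identity weights, relevance split evenly
# across the two outputs.
a = np.array([1.0, 2.0])
w = np.eye(2)
r_in = lrp_dense(a, w, np.array([0.5, 0.5]))
```

The key property (up to the epsilon stabilizer) is conservation: the input relevances sum to the output relevance, so a heatmap over input pixels accounts for the whole prediction. Applied over all layers of a face classifier, this is what lets the authors see which facial regions drive an age or gender decision.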

    Multi-view Face Detection Using Deep Convolutional Neural Networks

    In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in the HeadHunter method [22]. In this paper we propose the Deep Dense Face Detector (DDFD), a method that does not require pose/landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed the scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, and 2) there seems to be a correlation between the distribution of positive examples in the training set and the scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to previous methods, which are more complex and require annotations of either different poses or facial landmarks.
    Comment: in International Conference on Multimedia Retrieval 2015 (ICMR)
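A single-model dense detector like the one described produces many overlapping high-scoring windows per face, so the final step is non-maximum suppression over the score map. A minimal greedy NMS sketch (the boxes and threshold below are illustrative, not the paper's settings):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression over detector windows.
    boxes: (n, 4) array of [x1, y1, x2, y2]; scores: (n,).
    Returns indices of the boxes that survive, best-first."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the kept box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavy overlaps
    return keep

# Two windows on the same face plus one distant detection.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]],
                 dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)
```

Here the two overlapping windows collapse to the higher-scoring one and the distant box survives, so `kept` is `[0, 2]`. This post-processing is part of why the method needs no bounding-box regression stage: the dense scores plus suppression already yield one box per face.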