45,012 research outputs found

    Restricted Boltzmann Machines for Gender Classification

    This paper deals with automatic feature learning using a generative model, the Restricted Boltzmann Machine (RBM), for gender recognition in face images. The RBM is presented together with several practical learning tricks that improve its learning capability and speed up training. The performance of the learned features is compared against several linear methods using the same dataset and evaluation protocol. The results show an improvement in classification accuracy over classical linear projection methods. Moreover, to further increase classification accuracy, we have run experiments in which an SVM is fed the non-linear mapping obtained by the RBM in a tandem configuration.
    Mansanet Sandin, J.; Albiol Colomer, A.; Paredes Palacios, R.; Villegas, M.; Albiol Colomer, A.J. (2014). Restricted Boltzmann Machines for Gender Classification. Lecture Notes in Computer Science, 8814, 274–281. doi:10.1007/978-3-319-11758-4_30
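    A minimal sketch of this tandem configuration, assuming scikit-learn's BernoulliRBM and SVC rather than the authors' implementation, with placeholder data and made-up hyperparameters standing in for the faces and settings used in the paper:

        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.pipeline import Pipeline
        from sklearn.svm import SVC

        # Placeholder data: face crops flattened to rows and scaled to [0, 1],
        # with binary gender labels (hypothetical, not the paper's dataset).
        X = np.random.rand(200, 32 * 32)
        y = np.random.randint(0, 2, size=200)

        # The RBM learns a non-linear feature mapping; the SVM is then fed the
        # hidden-unit activations, mirroring the tandem configuration.
        rbm = BernoulliRBM(n_components=256, learning_rate=0.05,
                           batch_size=20, n_iter=30, random_state=0)
        svm = SVC(kernel="rbf", C=1.0)

        model = Pipeline([("rbm", rbm), ("svm", svm)])
        model.fit(X, y)
        print("training accuracy:", model.score(X, y))

    In a pipeline like this the SVM only ever sees the RBM's hidden representation, which is the sense in which the abstract describes the two models operating in tandem.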

    Understanding critical factors in gender recognition

    Gender classification is a task of paramount importance in face recognition research, and it is potentially useful in a large set of applications. In this paper we investigate the gender classification problem through an extended empirical analysis on the Face Recognition Grand Challenge version 2.0 dataset (FRGC2.0). We propose challenging experimental protocols over the dimensions of FRGC2.0, i.e., subject, facial expression, race, and controlled versus uncontrolled environment. We evaluate our protocols with several classification algorithms and different types of features, such as Gabor and LBP. Our results show that gender classification is independent of factors such as the race of the subject, facial expressions, and variations in controlled illumination conditions. We also report that Gabor features appear to be more robust than LBPs in uncontrolled environments.
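    The LBP-plus-classifier side of such an evaluation can be illustrated as follows. This is only a generic sketch using scikit-image's local_binary_pattern and a linear SVM on placeholder arrays (FRGC2.0 is not freely redistributable), and the grid size and LBP parameters are assumptions rather than the protocol used in the paper:

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import LinearSVC

        def lbp_histogram(gray, P=8, R=1, grid=(4, 4)):
            """Concatenated uniform-LBP histograms over a grid of cells."""
            codes = local_binary_pattern(gray, P, R, method="uniform")
            n_bins = P + 2                      # number of uniform patterns
            h, w = codes.shape
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    cell = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                                 j * w // grid[1]:(j + 1) * w // grid[1]]
                    hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
                    feats.append(hist / max(hist.sum(), 1))
            return np.concatenate(feats)

        # Random arrays standing in for FRGC2.0 face crops and gender labels.
        faces = np.random.rand(100, 64, 64)
        labels = np.random.randint(0, 2, size=100)

        X = np.array([lbp_histogram(f) for f in faces])
        clf = LinearSVC().fit(X, labels)
        print("training accuracy:", clf.score(X, labels))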

    Smile detection in the wild based on transfer learning

    Smile detection from unconstrained facial images is a specialized and challenging problem. As one of the most informative expressions, smiles convey basic underlying emotions, such as happiness and satisfaction, which leads to multiple applications, e.g., human behavior analysis and interactive control. Compared to the size of databases for face recognition, far less labeled data is available for training smile detection systems. To leverage the large amount of labeled data from face recognition datasets and to alleviate overfitting in smile detection, an efficient transfer learning-based smile detection approach is proposed in this paper. Unlike previous works, which use either hand-engineered features or deep convolutional networks trained from scratch, a well-trained deep face recognition model is explored and fine-tuned for smile detection in the wild. Three models are built by fine-tuning the face recognition model with different inputs: aligned, unaligned, and grayscale images generated from the GENKI-4K dataset. Experiments show that the proposed approach improves on the state of the art. The robustness of the models to noise and blur artifacts is also evaluated.
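    A rough illustration of this fine-tuning recipe, assuming PyTorch/torchvision: the paper starts from a well-trained deep face recognition model, whereas this sketch substitutes an ImageNet-pretrained ResNet-18 and random tensors for the GENKI-4K crops, so the backbone, the frozen-layer choice, and the hyperparameters are all assumptions:

        import torch
        import torch.nn as nn
        from torchvision import models

        # Start from a pretrained backbone and replace the classifier head.
        # The paper fine-tunes a face recognition network; ImageNet weights
        # are used here only as a stand-in for a well-trained starting point.
        net = models.resnet18(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(net.fc.in_features, 2)   # smile / no-smile

        # Freeze the early layers and fine-tune the rest, a common
        # transfer-learning setup when labeled data is scarce.
        for name, p in net.named_parameters():
            if name.startswith(("conv1", "bn1", "layer1")):
                p.requires_grad = False

        optimizer = torch.optim.SGD(
            [p for p in net.parameters() if p.requires_grad],
            lr=1e-3, momentum=0.9)
        criterion = nn.CrossEntropyLoss()

        # One toy training step on random tensors standing in for face crops.
        images = torch.randn(8, 3, 224, 224)
        targets = torch.randint(0, 2, (8,))
        loss = criterion(net(images), targets)
        loss.backward()
        optimizer.step()
        print(float(loss))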

    Improving face gender classification by adding deliberately misaligned faces to the training data

    A novel method of constructing face gender classifiers is proposed and evaluated. Previously, researchers have assumed that a computationally expensive face alignment step (in which the face image is transformed so that facial landmarks such as the eyes, nose, and chin are in uniform locations in the image) is required to maximize prediction accuracy on new face images. We argue, however, that this step is not necessary, and that machine learning classifiers can be made robust to face misalignments by automatically expanding the training data with examples of faces that have been deliberately misaligned (for example, translated or rotated). To test this hypothesis, we evaluate the automatic training-set expansion method with two types of image classifier: the first based on weak features such as Local Binary Pattern histograms, and the second based on SIFT keypoints. Using a benchmark face gender classification dataset recently proposed in the literature, we obtain a state-of-the-art accuracy of 92.5%, thus validating our approach.
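    The training-set expansion described above amounts to adding translated and rotated copies of each aligned face under the original label. A small sketch, assuming SciPy's ndimage transforms and placeholder crops in place of the benchmark data, with misalignment ranges chosen arbitrarily rather than taken from the paper:

        import numpy as np
        from scipy.ndimage import rotate, shift

        def misaligned_copies(face, n_copies=5, max_shift=3, max_angle=10, rng=None):
            """Generate deliberately misaligned variants of an aligned face crop."""
            if rng is None:
                rng = np.random.default_rng(0)
            copies = []
            for _ in range(n_copies):
                dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
                angle = rng.uniform(-max_angle, max_angle)
                img = shift(face, (dy, dx), mode="nearest")        # translate
                img = rotate(img, angle, reshape=False, mode="nearest")  # rotate
                copies.append(img)
            return copies

        # Expand the training set: each aligned face contributes itself plus
        # several translated/rotated versions with the same gender label.
        faces = np.random.rand(10, 64, 64)        # placeholder aligned crops
        labels = np.random.randint(0, 2, 10)
        X_aug, y_aug = [], []
        for f, y in zip(faces, labels):
            for img in [f] + misaligned_copies(f):
                X_aug.append(img.ravel())
                y_aug.append(y)
        X_aug, y_aug = np.array(X_aug), np.array(y_aug)
        print(X_aug.shape)

    The expanded arrays can then be fed to any of the classifiers mentioned in the abstract, e.g. one trained on LBP histograms or SIFT keypoints extracted from each (possibly misaligned) crop.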