
    Multi-Expert Gender Classification on Age Group by Integrating Deep Neural Networks

    Generally, facial age variation significantly affects gender classification accuracy, because facial shape and skin texture change as a person grows older. This calls for re-examining gender classification systems so that they take facial age information into account. In this paper, we propose Multi-expert Gender Classification on Age Group (MGA), an end-to-end multi-task learning scheme for age estimation and gender classification. First, two types of deep neural networks are utilized: a Convolutional Appearance Network (CAN) for facial appearance features and a Deep Geometry Network (DGN) for facial geometric features. Then, CAN and DGN are integrated by the proposed model-integration strategy and fine-tuned to improve age and gender classification accuracy. Facial images are categorized into one of three age groups (young, adult, and elder) based on their estimated age, and the system makes a gender prediction through an average-fusion strategy over three gender classification experts, each trained to fit the gender characteristics of one age group. Rigorous experiments conducted on challenging databases suggest that the proposed MGA outperforms several state-of-the-art methods at a smaller computational cost. (Comment: 12 pages)
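
    The average-fusion step can be pictured with a short sketch. This is a minimal illustration in Python, not the authors' code: the expert interface, the plain averaging, and the 0.5 decision threshold are assumptions, and the paper additionally routes images to age groups before fusing.

        import numpy as np

        def fuse_gender_prediction(face, experts):
            """Average fusion of gender experts (one per age group: young,
            adult, elder). Each expert maps a face to P(male); the final
            prediction thresholds the mean probability (sketch only)."""
            p_male = float(np.mean([expert(face) for expert in experts]))
            return "male" if p_male >= 0.5 else "female"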

    Joint Estimation of Age and Gender from Unconstrained Face Images using Lightweight Multi-task CNN for Mobile Applications

    Automatic age and gender classification from unconstrained images has become an essential technique on mobile devices. With limited computing power, developing a robust system is a challenging task. In this paper, we present an efficient convolutional neural network (CNN), called lightweight multi-task CNN, for simultaneous age and gender classification. The lightweight multi-task CNN uses depthwise separable convolutions to reduce the model size and save inference time. On the challenging public Adience dataset, its age and gender classification accuracy is better than that of baseline multi-task CNN methods. (Comment: To appear in the first IEEE International Conference on Multimedia Information Processing and Retrieval (IEEE MIPR), 2018)
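
    The efficiency trick named here, depthwise separable convolution, is easy to show in isolation. Below is a generic PyTorch building block, not the paper's architecture; the layer sizes and the BatchNorm/ReLU placement are illustrative assumptions.

        import torch.nn as nn

        class DepthwiseSeparableConv(nn.Module):
            """Depthwise separable convolution: a per-channel (depthwise) 3x3
            convolution followed by a 1x1 pointwise convolution. It replaces a
            standard 3x3 conv at a fraction of the parameters and FLOPs."""
            def __init__(self, in_ch, out_ch, stride=1):
                super().__init__()
                self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                           stride=stride, padding=1,
                                           groups=in_ch, bias=False)
                self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
                self.bn = nn.BatchNorm2d(out_ch)
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x):
                return self.relu(self.bn(self.pointwise(self.depthwise(x))))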

    Gender Classification Using Gradient Direction Pattern

    A novel methodology for gender classification is presented in this paper. It extracts features from local regions of a face using gray-level intensity differences. The facial area is divided into sub-regions, and the Gradient Direction Pattern (GDP) histograms extracted from those regions are concatenated into a single vector to represent the face. The classification accuracy obtained with a support vector machine outperforms all traditional feature descriptors for gender classification. The method is evaluated on images collected from the FERET database and achieves very high accuracy. (Comment: 3 pages, 5 figures, 3 tables, SCI journal)
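
    The regional-histogram pipeline can be sketched generically. The block below assumes a per-pixel GDP code image has already been computed (the GDP operator itself is not restated here); the 8x8 grid and 256 bins are illustrative choices, not the paper's settings.

        import numpy as np

        def region_histograms(code_image, grid=(8, 8), n_bins=256):
            """Split an encoded face image (e.g., per-pixel GDP codes) into a
            grid of sub-regions, histogram each region, and concatenate the
            histograms into a single feature vector for an SVM."""
            h, w = code_image.shape
            rh, rw = h // grid[0], w // grid[1]
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    block = code_image[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
                    hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
                    feats.append(hist)
            return np.concatenate(feats).astype(np.float32)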

    Age and Gender Classification From Ear Images

    In this paper, we present a detailed analysis of extracting the soft biometric traits age and gender from ear images. Although there have been a few previous works on gender classification using ear images, to the best of our knowledge, this study is the first on age classification from ear images. We have utilized both geometric and appearance-based features for ear representation. The geometric features are based on eight anthropometric landmarks and consist of 14 distance measurements and two area calculations. The appearance-based methods employ deep convolutional neural networks for representation and classification. The well-known convolutional neural network models AlexNet, VGG-16, GoogLeNet, and SqueezeNet have been adopted for the study. They have been fine-tuned on a large-scale ear dataset built from the profile and close-to-profile face images in the Multi-PIE face dataset; in this way, we have performed a domain adaptation. The adapted models have then been fine-tuned once more on the small-scale target ear dataset, which contains only around 270 ear images for training. According to the experimental results, appearance-based methods are superior to the methods based on geometric features. We have achieved 94% accuracy for gender classification, whereas 52% accuracy has been obtained for age classification. These results indicate that ear images provide useful cues for age and gender classification; however, further work is required for age estimation. (Comment: 7 pages, 3 figures, accepted for the IAPR/IEEE International Workshop on Biometrics and Forensics (IWBF) 2018)
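
    The geometric branch (landmark distances plus areas) can be illustrated with NumPy. For simplicity the sketch takes all pairwise distances between the eight landmarks and one shoelace-formula area, whereas the paper hand-picks 14 distances and two areas.

        import numpy as np
        from itertools import combinations

        def geometric_ear_features(landmarks):
            """Toy landmark-based ear features: all pairwise distances between
            anthropometric landmarks plus one polygon area. Assumes the
            landmarks are ordered along the ear contour (shape (8, 2))."""
            pts = np.asarray(landmarks, dtype=float)
            dists = [np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)]
            x, y = pts[:, 0], pts[:, 1]
            # Shoelace formula for the area enclosed by the landmark polygon.
            area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
            return np.array(dists + [area])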

    Fingerprint Gender Classification using Wavelet Transform and Singular Value Decomposition

    A novel method of gender classification from fingerprints is proposed, based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). Classification is achieved by combining the energy computed from all DWT sub-bands with the spatial features given by the non-zero singular values obtained from the SVD of the fingerprint images. A k-nearest-neighbor (KNN) classifier is used. The method is evaluated on an internal database of 3570 fingerprints, of which 1980 are male and 1590 are female. Finger-wise gender classification reaches 94.32% on the left-hand little fingers of female subjects and 95.46% on the left-hand index fingers of male subjects. Across all tested fingers, classification accuracy is 91.67% for male subjects and 84.69% for female subjects. An overall classification rate of 88.28% has been achieved. (Comment: 12 figures and 6 tables)
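
    The feature pipeline maps directly onto standard libraries. A sketch with PyWavelets, NumPy, and scikit-learn follows; the wavelet family, the decomposition level, and the fixed-length truncation of the singular values are assumptions, not the paper's settings.

        import numpy as np
        import pywt
        from sklearn.neighbors import KNeighborsClassifier

        def dwt_svd_features(img, wavelet="db1", level=3, n_sv=32):
            """Energy of every DWT sub-band concatenated with the leading
            singular values of the fingerprint image (truncated to a fixed
            length so KNN sees equal-sized vectors)."""
            img = np.asarray(img, dtype=float)
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            energies = [np.sum(np.square(coeffs[0]))]          # approximation band
            for detail in coeffs[1:]:                          # (cH, cV, cD) per level
                energies.extend(np.sum(np.square(d)) for d in detail)
            s = np.linalg.svd(img, compute_uv=False)
            return np.concatenate([energies, s[:n_sv]])

        # knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)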

    Understanding Unequal Gender Classification Accuracy from Face Images

    Recent work shows unequal performance of commercial face classification services on the gender classification task across intersectional groups defined by skin type and gender. Accuracy on dark-skinned females is significantly worse than on any other group. In this paper, we conduct several analyses to uncover the reason for this gap. The main finding, perhaps surprisingly, is that skin type is not the driver. This conclusion is reached via stability experiments that vary an image's skin type using color-theoretic methods, namely luminance mode-shift and optimal transport. A second suspect, hair length, is likewise shown not to be the driver via experiments on face images cropped to exclude the hair. Finally, using contrastive post-hoc explanation techniques for neural networks, we bring forth evidence suggesting that differences in lip, eye, and cheek structure across ethnicity lead to the observed differences. Further, lip and eye makeup are seen as strong predictors for a female face, which is a troubling propagation of a gender stereotype.
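
    One of the stability tests, shifting an image's luminance while preserving chrominance, can be approximated in a few lines. The sketch below applies a uniform L-channel shift in CIELAB space via scikit-image; the paper's luminance mode-shift and optimal-transport procedures are more refined than this simplification.

        import numpy as np
        from skimage import color

        def luminance_shift(rgb_img, delta):
            """Shift the luminance (L) channel in CIELAB space while leaving
            chrominance (a, b) untouched -- a crude stand-in for the paper's
            mode-shift. `delta` is the L shift; L ranges over [0, 100]."""
            lab = color.rgb2lab(rgb_img)
            lab[..., 0] = np.clip(lab[..., 0] + delta, 0, 100)
            return color.lab2rgb(lab)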

    Relevant features for Gender Classification in NIR Periocular Images

    Most gender classification methods for NIR images have used iris information. Recent work has explored the use of the whole periocular region, which surprisingly achieves better results. This suggests that the most relevant information for gender classification is not located in the iris, as one might expect. In this work, we analyze and demonstrate the location of the most relevant features that describe gender in periocular NIR images and evaluate their influence on classification. Experiments show that the periocular region contains more gender information than the iris region. We extracted several features (intensity, texture, and shape) and ranked them according to their relevance using the XGBoost algorithm. A support vector machine and nine ensemble classifiers were used to test gender classification accuracy when using the most relevant features. The best result (89.22%) was obtained when 4,000 features located in the periocular region were used. Additional experiments with full periocular images versus iris-occluded images were performed; the gender classification rates obtained were 84.35% and 85.75%, respectively. We also contribute to the state of the art with a new database (UNAB-Gender). From these results, we suggest focusing only on the area surrounding the iris, which allows faster gender classification from NIR periocular images. (Comment: 12 pages, paper accepted by IET Biometrics)
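
    The relevance-ranking step, XGBoost importance followed by an SVM on the top features, is straightforward to sketch. The hyperparameters below (tree count, depth, kernel) are placeholders, not the paper's configuration.

        import numpy as np
        from xgboost import XGBClassifier
        from sklearn.svm import SVC

        def top_k_by_xgboost(X, y, k=4000):
            """Rank features by XGBoost importance and return the indices of
            the top k, mirroring the relevance-based selection described
            above (illustrative hyperparameters)."""
            ranker = XGBClassifier(n_estimators=200, max_depth=4)
            ranker.fit(X, y)
            return np.argsort(ranker.feature_importances_)[::-1][:k]

        # idx = top_k_by_xgboost(X_train, y_train)
        # svm = SVC(kernel="rbf").fit(X_train[:, idx], y_train)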

    Distance weighted discrimination of face images for gender classification

    We illustrate the advantages of distance weighted discrimination (DWD) for classification and feature extraction in a High Dimension Low Sample Size (HDLSS) situation. The HDLSS context is a gender classification problem on face images in which the dimension of the data is several orders of magnitude larger than the sample size. We compare distance weighted discrimination with Fisher's linear discriminant, support vector machines, and principal component analysis by exploring their classification interpretation through insightful visuanimations and by examining the classifiers' discriminant errors. This analysis enables us to make new contributions to the understanding of the drivers of human discrimination between males and females. (Comment: 9 pages, 4 figures, 1 table)
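
    For context, the standard linear DWD problem (after Marron, Todd, and Ahn, 2007) replaces the SVM hinge loss with a sum of inverse margins, so every training point, not just the support vectors, shapes the discriminant direction, which is what helps in HDLSS settings:

        \min_{\omega,\,\beta,\,\xi}\; \sum_{i=1}^{n} \frac{1}{r_i} \;+\; C \sum_{i=1}^{n} \xi_i
        \quad \text{s.t.} \quad
        r_i = y_i\,(x_i^{\top}\omega + \beta) + \xi_i,
        \qquad r_i > 0,\quad \xi_i \ge 0,\quad \lVert \omega \rVert_2 \le 1 .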

    Local Deep Neural Networks for Age and Gender Classification

    Local deep neural networks have recently been introduced for gender recognition. Although they achieve very good performance, they are computationally expensive to train. In this work, we introduce a simplified version of local deep neural networks which significantly reduces the training time. Instead of using hundreds of patches per image, as suggested by the original method, we propose to use 9 overlapping patches per image which cover the entire face region, as sketched below. This greatly reduces training time, since just 9 patches are extracted per image instead of hundreds, at the expense of slightly reduced performance. We tested the proposed modified local deep neural network approach on the LFW and Adience databases for gender and age classification. For both tasks and both databases, performance is at most 1% lower than that of the original algorithm. We have also investigated which patches are most discriminative for age and gender classification. It turns out that the mouth and eye regions are useful for age classification, whereas just the eye region is useful for gender classification.
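
    The 3x3 overlapping-patch layout is easy to reproduce. In this sketch each patch spans half the image side, so the nine patches tile the face with 50% overlap; the paper's exact patch geometry may differ.

        import numpy as np

        def nine_overlapping_patches(face, patch_frac=0.5):
            """Extract a 3x3 grid of overlapping patches that together cover
            the whole face. With patch_frac=0.5 each patch spans half the
            image side, so neighboring patches overlap by half a patch."""
            h, w = face.shape[:2]
            ph, pw = int(h * patch_frac), int(w * patch_frac)
            ys = np.linspace(0, h - ph, 3).astype(int)
            xs = np.linspace(0, w - pw, 3).astype(int)
            return [face[y:y+ph, x:x+pw] for y in ys for x in xs]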

    Efficient Gender Classification Using a Deep LDA-Pruned Net

    Many real-time tasks, such as human-computer interaction, require fast and efficient facial gender classification. Although deep CNNs have been very effective for a multitude of classification tasks, their high space and time demands make them impractical on personal computers and mobile devices without a powerful GPU. In this paper, we develop a 16-layer, yet lightweight, neural network which boosts efficiency while maintaining high accuracy. Our net is pruned from the VGG-16 model, starting from the last convolutional (conv) layer, where we find neuron activations are highly uncorrelated given the gender. Through Fisher's linear discriminant analysis (LDA), we show that this high decorrelation makes it safe to directly discard last-conv-layer neurons with high within-class variance and low between-class variance. Combined with either support vector machines (SVM) or Bayesian classification, the reduced CNNs achieve comparable (or even higher) accuracies on the LFW and CelebA datasets than the original net with fully connected layers. On LFW, only four Conv5_3 neurons suffice to maintain a comparably high recognition accuracy, which reduces the total network size by a factor of 70X with an 11-fold speedup. Comparisons with a state-of-the-art pruning method, as well as with two smaller nets, in terms of accuracy loss and convolutional-layer pruning rate are also provided. (Comment: The only difference from the previous version v2 is the title on the arXiv page. I am changing it back to the original title from v1 because otherwise Google Scholar cannot track citations to this arXiv paper correctly. You may cite either the conference version or this arXiv version; they are equivalent.)
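
    The pruning criterion, keeping last-conv neurons whose activations separate the two genders well, amounts to a per-neuron Fisher ratio. The sketch below computes it from raw activations; the paper additionally relies on the activations being decorrelated, which this simplification ignores.

        import numpy as np

        def fisher_ratios(acts, labels):
            """Per-neuron Fisher criterion: between-class variance over
            within-class variance. Neurons with a low ratio (high within-class,
            low between-class variance) are candidates for pruning.

            acts:   (n_samples, n_neurons) activation matrix
            labels: (n_samples,) gender labels in {0, 1}
            """
            a0, a1 = acts[labels == 0], acts[labels == 1]
            between = (a0.mean(axis=0) - a1.mean(axis=0)) ** 2
            within = a0.var(axis=0) + a1.var(axis=0)
            return between / (within + 1e-12)

        # keep = np.argsort(fisher_ratios(A, y))[::-1][:4]   # e.g., keep 4 neurons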