34,878 research outputs found

    Texture features extraction based on GLCM for face retrieval system

    Texture features play an important role in most image retrieval techniques that aim for high accuracy. In this work, a face image retrieval method based on texture analysis and statistical features is proposed, with the texture features extracted using the gray-level co-occurrence matrix (GLCM). The GLCM-based method involves two phases. In the first phase, several image pre-processing techniques are combined to isolate the main facial region (the centre of the face image); the GLCM is then computed for the grayscale face image and a set of second-order statistical texture features is extracted. In the second phase, faces are retrieved by finding the minimum distance between the texture features of an unknown face image and the texture features of the face images stored in the database. The experimental results show that the proposed method achieves a high degree of accuracy in face image retrieval.
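    The following is a minimal sketch of this kind of pipeline: GLCM-based second-order texture features for a grayscale face crop, followed by minimum-distance retrieval. The property list, distances, angles and the Euclidean distance metric are illustrative assumptions, not the paper's exact settings.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_face):
        """Second-order statistical texture features from an 8-bit grayscale face crop."""
        glcm = graycomatrix(gray_face,
                            distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
        return np.array([graycoprops(glcm, p).mean() for p in props])

    def retrieve(query_gray, database):
        """Return the key of the stored face whose texture features are closest to the query."""
        q = glcm_features(query_gray)
        return min(database, key=lambda k: np.linalg.norm(q - database[k]))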

    Modeling of Facial Wrinkles for Applications in Computer Vision

    Analysis and modeling of aging human faces have been extensively studied in the past decade for applications in computer vision such as age estimation, age progression and face recognition across aging. Most of this research is based on facial appearance and facial features such as face shape, geometry, location of landmarks and patch-based texture features. Despite the recent availability of higher-resolution, high-quality facial images, we do not find much work on the image analysis of local facial features such as wrinkles specifically. For the most part, modeling of facial skin texture, fine lines and wrinkles has been a focus of computer graphics research for photo-realistic rendering applications. In computer vision, very few aging-related applications focus on such facial features. While several survey papers can be found on facial aging analysis in computer vision, this chapter focuses specifically on the analysis of facial wrinkles in the context of several applications. Facial wrinkles can be characterized as subtle discontinuities or cracks in the surrounding inhomogeneous skin texture, which makes them challenging to detect or localize in images. First, we review commonly used image features that capture the intensity gradients caused by facial wrinkles and then present research on modeling and analysis of facial wrinkles as aging texture or curvilinear objects for different applications. The reviewed applications include localization or detection of wrinkles in facial images, incorporation of wrinkles for more realistic age progression, analysis for age estimation, and inpainting/removal of wrinkles for facial retouching.
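    As a hedged illustration (not taken from the chapter) of one common way to capture the intensity gradients associated with wrinkles, the sketch below computes the maximum response of an oriented Gabor filter bank over a grayscale skin patch; the frequency and number of orientations are assumptions.

    import numpy as np
    from skimage.filters import gabor

    def wrinkle_response(gray_patch, frequency=0.15, n_orientations=8):
        """Per-pixel maximum Gabor magnitude over orientations; high values suggest
        curvilinear discontinuities such as wrinkles."""
        responses = []
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(gray_patch, frequency=frequency, theta=theta)
            responses.append(np.hypot(real, imag))
        return np.max(responses, axis=0)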

    Local Higher-Order Statistics (LHS) describing images with statistics of local non-binarized pixel patterns

    We propose a new image representation for texture categorization and facial analysis, relying on the use of higher-order local differential statistics as features. It has recently been shown that distributions of small local pixel patterns can be highly discriminative while being extremely efficient to compute, in contrast to models based on the global structure of images. Motivated by such works, we propose to use higher-order statistics of local non-binarized pixel patterns for image description. The proposed model requires neither (i) a user-specified quantization of the space of pixel patterns nor (ii) any heuristics for discarding low-occupancy volumes of that space. Instead, we use a data-driven soft quantization of the space, with parametric mixture models, combined with higher-order statistics based on Fisher scores. We demonstrate that this leads to a more expressive representation which, when combined with discriminatively learned classifiers and metrics, achieves state-of-the-art performance on challenging texture and facial analysis datasets in a low-complexity setup. Further, it is complementary to higher-complexity features and, when combined with them, improves performance.
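    A minimal sketch of this kind of pipeline is given below: non-binarized local pixel-difference patterns are collected, soft-quantized with a Gaussian mixture model, and aggregated into a Fisher-score descriptor. The 3x3 neighbourhood, the diagonal-covariance GMM and the simplified (mean-gradient only) Fisher score are assumptions rather than the paper's exact formulation.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def local_patterns(gray, eps=1e-8):
        """8-dimensional vectors of centre-subtracted neighbour differences (not binarized)."""
        patches = np.lib.stride_tricks.sliding_window_view(gray.astype(np.float32), (3, 3))
        patches = patches.reshape(-1, 9)
        diffs = np.delete(patches, 4, axis=1) - patches[:, 4:5]
        return diffs / (np.linalg.norm(diffs, axis=1, keepdims=True) + eps)

    def fisher_score(patterns, gmm):
        """Mean-gradient part of the Fisher score under a fitted diagonal-covariance GMM."""
        post = gmm.predict_proba(patterns)                    # soft assignments to components
        diff = patterns[:, None, :] - gmm.means_[None, :, :]  # shape: N x K x D
        grad = (post[:, :, None] * diff / gmm.covariances_[None, :, :]).mean(axis=0)
        return grad.ravel()

    # Usage sketch: gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(pooled_patterns)
    # then describe each image by fisher_score(local_patterns(img), gmm).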

    On the quantitative estimation of short-term aging in human faces

    Facial aging has been only partially studied in the past, and mostly in a qualitative way. This paper presents a novel approach to the estimation of facial aging aimed at the quantitative evaluation of the changes in facial appearance over time. In particular, the changes in both face shape and texture due to short-term aging are considered. The developed framework exploits the concept of “distinctiveness” of facial features and the temporal evolution of this measure. The analysis is performed both at a global and a local level to identify the features that are more stable over time. Several experiments are performed on publicly available databases with image sequences densely sampled over a time span of several years. The reported results clearly show the potential of the methodology for a number of applications in biometric identification from human faces.
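    As a hedged illustration of the idea of feature distinctiveness and its temporal evolution (not the paper's exact definition), the sketch below measures how far a subject's descriptor lies from the rest of a gallery and tracks that value across acquisition dates; the mean-distance definition and the choice of descriptor are assumptions.

    import numpy as np

    def distinctiveness(subject_feat, gallery_feats):
        """Mean Euclidean distance of one subject's feature vector from all other subjects."""
        return np.linalg.norm(gallery_feats - subject_feat, axis=1).mean()

    def temporal_profile(features_over_time, gallery_feats):
        """Distinctiveness of the same subject at successive acquisition times."""
        return [distinctiveness(f, gallery_feats) for f in features_over_time]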

    Local Binary Patterns Calculated Over Gaussian Derivative Images

    In this paper we present a new static descriptor for facial image analysis. We combine Gaussian derivatives with Local Binary Patterns to provide a robust and powerful descriptor especially suited to extracting texture from facial images. Gaussian features in the form of image derivatives form the input to the Local Binary Pattern (LBP) operator instead of the original image. The proposed descriptor is tested on face recognition and smile detection. For face recognition we use the CMU-PIE and the Yale B + extended Yale B databases. Smile detection is performed on the benchmark GENKI-4K database. With minimal machine learning, our descriptor outperforms the state of the art at smile detection and compares favourably with the state of the art at face recognition.
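    A minimal sketch of this kind of descriptor, assuming illustrative scales, derivative orders and LBP parameters: Gaussian derivative images are computed at several scales and each one, rather than the raw image, is fed to the LBP operator, with the resulting histograms concatenated.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import local_binary_pattern

    def lbp_hist(img, P=8, R=1):
        """Uniform-LBP histogram of a single (derivative) image."""
        codes = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    def gaussian_lbp_descriptor(gray, sigmas=(1.0, 2.0)):
        """Concatenated LBP histograms of first- and second-order Gaussian derivatives."""
        feats = []
        for sigma in sigmas:
            for order in ((0, 1), (1, 0), (0, 2), (2, 0), (1, 1)):
                deriv = gaussian_filter(gray.astype(np.float32), sigma, order=order)
                feats.append(lbp_hist(deriv))
        return np.concatenate(feats)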

    Classification of Humans into Ayurvedic Prakruti Types using Computer Vision

    Ayurveda, a 5000-year-old Indian medical science, holds that the universe, and hence humans, are made up of five elements, namely ether, fire, water, earth, and air. The three Doshas (Tridosha) Vata, Pitta, and Kapha originate from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to that person’s ‘Prakruti’. Prakruti governs the physiological and psychological tendencies in all living beings as well as the way they interact with the environment. This balance influences physiological features such as the texture and colour of the skin, hair and eyes, the length of the fingers, the shape of the palm, body frame, strength of digestion and many more, as well as psychological features such as a person’s nature (introverted, extroverted, calm, excitable, intense, laid-back) and their reaction to stress and disease. All these features are coded in a person’s constitution at the time of their creation and do not change throughout their lifetime. Ayurvedic doctors analyze the Prakruti of a person either by assessing physical features manually and/or by examining the nature of the heartbeat (pulse). Based on this analysis, they diagnose, prevent and cure diseases in patients by prescribing precision medicine. This project focuses on identifying the Prakruti of a person by analysing facial features such as hair, eyes, nose, lips and skin colour using facial recognition techniques in computer vision. This is the first research of its kind in this problem area that attempts to bring image processing into the domain of Ayurveda.

    Automatic 3D facial expression recognition using geometric and textured feature fusion

    3D facial expression recognition has gained increasing interest from the affective computing community because issues inherent to 2D imaging, such as pose variations and illumination changes, are largely eliminated. Many applications can benefit from this research, such as medical applications involving the detection of pain and psychological effects in patients, and the human-computer interaction tasks used by today's intelligent systems. In this paper, we investigate 3D facial expression recognition by examining several feature extraction methods applied to the 2D texture images and the 3D geometric data, fusing the two domains to increase overall performance. A one-vs-all multi-class SVM classifier is adopted to recognize the expressions Angry, Disgust, Fear, Happy, Neutral, Sad and Surprise on the BU-3DFE and Bosphorus databases. The proposed approach shows an increase in performance when the features are fused together.
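    A hedged sketch of the fusion and classification step described above: the 2D texture and 3D geometric feature vectors are concatenated and a one-vs-all multi-class SVM is trained on the fused representation. The feature extractors themselves and the SVM kernel/scaling choices are assumptions.

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    EXPRESSIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

    def fuse(texture_feats, geometric_feats):
        """Feature-level fusion by concatenating the two modalities per sample."""
        return np.hstack([texture_feats, geometric_feats])

    def train_classifier(X_fused, y):
        """One-vs-all multi-class SVM on standardized, fused features."""
        clf = OneVsRestClassifier(make_pipeline(StandardScaler(), SVC(kernel="rbf")))
        return clf.fit(X_fused, y)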

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domains. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
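    The following is a minimal sketch of the Motion History Image representation mentioned above, in which recent motion is encoded as a decaying intensity map from which orientation histograms can later be derived; the frame-difference threshold and duration values are assumptions, and the FFD-based registration variant is not shown.

    import numpy as np

    def update_mhi(mhi, prev_frame, frame, timestamp, duration=1.0, threshold=15):
        """Update a floating-point Motion History Image with the motion between two grayscale frames."""
        motion = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32)) > threshold
        mhi[motion] = timestamp                            # stamp moving pixels with the current time
        mhi[~motion & (mhi < timestamp - duration)] = 0.0  # fade out motion older than `duration`
        return mhi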