First impressions: A survey on vision-based apparent personality trait analysis
© 2019 IEEE. Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most frequently considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, this paper presents an up-to-date review of existing vision-based approaches to apparent personality trait recognition. Seminal and cutting-edge works on the subject are described, and their distinctive features and limitations are discussed and compared. Future avenues of research in the field are identified and discussed. Furthermore, the subjectivity of data labeling and evaluation is reviewed, along with current datasets and the challenges organized to push research in the field. Peer reviewed. Postprint (author's final draft).
Image based human body rendering via regression & MRF energy minimization
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A machine learning method for synthesising human images is explored to create new images without relying on 3D modelling. Machine learning allows the creation of new images through prediction from existing data based on training images. In the present study, image synthesis is performed at two levels: contour and pixel. A class of learning-based methods is formulated to create object contours from the training image for the synthetic image, which allows pixel synthesis within the contours at the second level. The methods rely on applying robust object descriptions, dynamic learning models after appropriate motion segmentation, and machine learning-based frameworks.
Image-based human image synthesis using machine learning is a research focus that has recently gained considerable attention in the field of computer graphics. It makes use of techniques from image/motion analysis in computer vision. The problem lies in estimating methods for image-based object configuration (i.e. segmentation and contour outline). Using the results of these analysis methods as a basis, the research adopts a machine learning approach in which human images are synthesised by executing the synthesis of contours and pixels through learning from training images.
Firstly, the thesis shows how an accurate silhouette is distilled, for accuracy and efficiency, using a purpose-developed background subtraction method. The traditional vector machine approach is used to avoid ambiguities within the regression process. Images can be represented as a class of accurate and efficient vectors, for single images as well as sequences. Secondly, the framework is explored using a particular class of machine learning methods, support vector regression (SVR), to obtain the convergence result of vectors for contour allocation. The changing relationship between the synthetic image and the training image is expressed as a vector and represented as functions. Finally, pixel synthesis is performed based on belief propagation.
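The silhouette-extraction step described above can be illustrated with a minimal background-subtraction sketch. The function name and the threshold value are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Binary silhouette via simple background subtraction.

    Pixels whose absolute difference from the background model
    exceeds `threshold` are marked as foreground (1).
    """
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    if diff.ndim == 3:          # colour image: take max channel difference
        diff = diff.max(axis=2)
    return (diff > threshold).astype(np.uint8)

# Toy example: a bright 2x2 "person" on a dark background.
bg = np.zeros((4, 4), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200
mask = extract_silhouette(frame, bg)
```

In practice the thesis's method would operate on real image sequences; this sketch only shows the thresholded-difference principle that background subtraction builds on.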
This thesis proposes a novel image-based rendering method for colour image synthesis, using SVR and belief propagation for generalisation, to enable the prediction of contour and colour information from input colour images. The methods rely on appropriately defined and robust input colour images, optimising the input contour images within a sparse SVR framework. Firstly, the thesis shows how contours can be predicted effectively and efficiently from small numbers of input contour images. In addition, the thesis exploits the sparsity properties of SVR for efficiency, and makes use of SVR to estimate the regression function. The image-based rendering method employed in this study enables contour synthesis from small numbers of input source images. This procedure avoids the use of complex models and geometry information. Secondly, the method used for human body contour colouring is extended to define eight-connected pixels and construct a link distance field via the belief propagation method. The link distance, which acts as the message in propagation, is transformed by improving the lower-envelope method of the fast distance transform. Finally, the methodology is tested on human facial and human body clothing information. The accuracy of the test results for the human body model confirms the efficiency of the proposed method.
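The "lower-envelope" fast distance transform mentioned above can be sketched in one dimension. This is the generic lower-envelope-of-parabolas formulation (in the style of Felzenszwalb and Huttenlocher), not the thesis's improved variant; the function name is an illustrative assumption:

```python
import numpy as np

def distance_transform_1d(f):
    """Squared Euclidean distance transform of a 1-D cost array:
    d[q] = min_p (q - p)^2 + f[p], computed in linear time by
    maintaining the lower envelope of the parabolas rooted at each p.
    """
    n = len(f)
    d = np.zeros(n)
    v = np.zeros(n, dtype=int)       # positions of parabolas in the envelope
    z = np.zeros(n + 1)              # boundaries between envelope parabolas
    z[0], z[1] = -np.inf, np.inf
    k = 0
    for q in range(1, n):
        while True:
            # intersection of the parabola rooted at q with the one at v[k]
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            if s <= z[k]:
                k -= 1               # parabola v[k] is hidden; pop it
            else:
                break
        k += 1
        v[k] = q
        z[k] = s
        z[k + 1] = np.inf
    k = 0
    for q in range(n):               # read distances off the envelope
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d
```

With large costs standing in for "no source here" (e.g. `f = [0, 1e9, 1e9, 1e9, 0]`), the transform returns the squared distances `[0, 1, 4, 1, 0]` to the nearest zero-cost position; a 2D transform applies this pass along rows and then columns.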
Ubiquitous Technologies for Emotion Recognition
Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.
3D Morphable Face Models -- Past, Present and Future
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.
Developing a Computer System for the Generation of Unique Wrinkle Maps for Human Faces. Generating 2D Wrinkle Maps using Various Image Processing Techniques and the Design of 3D Facial Ageing System using 3D Modelling Tools.
Facial Ageing (FA) is a fundamental issue, as ageing in general is part of our daily life process. FA is used in security, in finding missing children, and in other applications. It is also a form of Facial Recognition (FR) that helps identify suspects. FA affects several parts of the human face under the influence of different biological and environmental factors. One of the major facial feature changes that occurs as a result of ageing is the appearance and development of wrinkles. Facial wrinkles are skin folds; their shapes and numbers differ from one person to another, and these characteristics can therefore be exploited if a system is implemented to extract facial wrinkles in the form of maps.
This thesis presents a new technique for generating three-dimensional facial wrinkle pattern information that can also be utilised for biometric applications, which can further strengthen system security. The procedural approaches adopted for investigating this new technique are the extraction of two-dimensional wrinkle maps of frontal human faces from digital images, and the design of a three-dimensional wrinkle pattern formation system that utilises the generated wrinkle maps.
The first approach is carried out using image processing tools so that, for any given individual, two wrinkle maps are produced: the first map is in binary form and shows the positions of the wrinkles on the face, while the other is a coloured version that indicates the different intensities of the wrinkles.
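The pair of maps described above, one binary (wrinkle positions) and one graded (wrinkle intensities), can be sketched with a simple gradient-magnitude detector. The function name, the central-difference gradient, and the threshold are illustrative assumptions standing in for the thesis's full image-processing pipeline:

```python
import numpy as np

def wrinkle_maps(gray, threshold=40.0):
    """Produce a binary wrinkle map and an intensity map from a
    greyscale face image.

    Central-difference gradients stand in for a full edge-detection
    pipeline: strong local intensity changes are treated as wrinkles.
    """
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)                  # per-axis central differences
    magnitude = np.hypot(gx, gy)             # gradient magnitude
    binary_map = (magnitude > threshold).astype(np.uint8)    # positions
    intensity_map = np.where(binary_map == 1, magnitude, 0)  # strengths
    return binary_map, intensity_map

# Toy example: one dark horizontal "wrinkle" on a uniform face patch.
gray = np.full((5, 5), 200.0)
gray[2, :] = 50.0
binary_map, intensity_map = wrinkle_maps(gray)
```

On the toy patch, the rows adjacent to the dark line light up in the binary map, and the intensity map carries the corresponding gradient magnitudes, mirroring the binary/coloured map pairing the thesis describes.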
The second approach, the development of the 3D system, involves the alignment of the binary wrinkle maps on the corresponding 3D face models, followed by the projection of 3D curves in order to acquire 3D representations of the wrinkles. With the aid of the coloured wrinkle maps, as well as some ageing parameters, simulations and predictions of the 3D wrinkles are performed.
Automatic facial age estimation
The reliability of automatically estimating human ages by processing input facial images has generally been found to be poor. On the other hand, various real-world applications, often relating to safety and security, depend on an accurate estimate of a person's age. In such situations, Face Image based Automatic Age Estimation (FI-AAE) systems, which are more reliable and may ideally surpass human ability, are important and represent a critical pre-requisite technology. Unfortunately, in terms of estimation accuracy and thus performance, contemporary FI-AAE systems are impeded by challenges in both of the two major FI-AAE processing phases, i.e. i) age-based feature extraction and representation and ii) age group classification. Challenges in the former phase arise because facial shape and texture change independently, and the magnitude of these changes varies during the different stages of a person's life. Additionally, contemporary schemes struggle to exploit age-group-specific characteristics of these features, which in turn has a detrimental effect on overall system performance. Furthermore, misclassification errors which occur in the second processing phase, caused by the smooth inter-class variations often observed between adjacent age groups, pose another major challenge and are responsible for low overall FI-AAE performance. In this thesis a novel Multi-Level Age Estimation (ML-AE) framework is proposed that addresses the aforementioned challenges and improves upon state-of-the-art FI-AAE system performance. The proposed ML-AE is a hierarchical classification scheme that maximizes and then exploits inter-class variation among different age groups at each level of the hierarchy. Furthermore, the proposed scheme exploits age-based discriminating information taken from two different cues (i.e. facial shape and texture) at the decision level, which improves age estimation results.
During the process of achieving its main objective of age estimation, this research work also contributes to two associated image processing/analysis areas: i) face image modeling and synthesis, a process of representing face image data with a low-dimensional set of parameters; this is considered a precursor to every face-image-based age estimation system and has been studied in this thesis within the context of face image recognition; and ii) measuring face image data variability, which can help in representing and ranking different face image datasets according to their classification difficulty level. Thus a variability measure is proposed that can also be used to predict the classification performance of a given face recognition system operating upon a particular input face dataset. Experimental results based on well-known face image datasets revealed the superior performance of the proposed face analysis, synthesis, and face-image-based age classification methodologies compared with conventional schemes.
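The hierarchical, multi-level classification idea behind ML-AE can be sketched with a two-level nearest-centroid classifier: level 1 assigns a coarse age group, and level 2 refines the decision within that group only. The group names, feature vectors, and nearest-centroid rule are hypothetical stand-ins, not the thesis's actual classifiers or features:

```python
import numpy as np

def nearest_centroid(x, centroids):
    """Return the label of the centroid closest to feature vector x."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

def classify_age(x, coarse, fine):
    """Level 1 picks a coarse age group; level 2 refines within it,
    so adjacent fine groups from *other* coarse groups never compete."""
    group = nearest_centroid(x, coarse)
    return group, nearest_centroid(x, fine[group])

# Hypothetical 2-D face features and a two-level hierarchy.
coarse = {"young": np.array([0.2, 0.1]), "old": np.array([0.8, 0.9])}
fine = {
    "young": {"0-12": np.array([0.1, 0.05]), "13-25": np.array([0.3, 0.2])},
    "old":   {"40-60": np.array([0.7, 0.8]), "60+":   np.array([0.9, 0.95])},
}
group, age_range = classify_age(np.array([0.25, 0.18]), coarse, fine)
```

Restricting the second-level decision to one coarse group is what limits the adjacent-group confusions the abstract highlights, since fine groups belonging to a different coarse group are excluded from the final decision.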