15,378 research outputs found
On automatic age estimation from facial profile view
In recent years, automatic facial age estimation has gained popularity due to its numerous applications. Much work has been done on frontal images, and lately minimal estimation errors have been achieved on most benchmark databases. However, in reality, images obtained in unconstrained environments are not always frontal. For instance, when conducting a demographic study or crowd analysis, one may obtain profile images of the face. To the best of our knowledge, no attempt has been made to estimate age from side-view face images. Here we address this by using a pre-trained deep residual neural network (ResNet) to extract features, and then a sparse partial least squares regression approach to estimate age. Despite having less information than frontal images, our results show that the extracted deep features achieve promising performance.
Machine Analysis of Facial Expressions
No abstract
Facial Asymmetry Analysis Based on 3-D Dynamic Scans
Facial dysfunction is a fundamental symptom that often relates to many neurological illnesses, such as stroke, Bell’s palsy, and Parkinson’s disease. Current methods for detecting and assessing facial dysfunction rely mainly on trained practitioners, which has significant limitations since such assessments are often subjective. This paper presents a computer-based methodology for facial asymmetry analysis that aims to automatically detect facial dysfunction. The method is based on dynamic 3-D scans of human faces. Preliminary evaluation on facial sequences from the Hi4D-ADSIP database suggests that the proposed method can assist in the quantification and diagnosis of facial dysfunction in neurological patients.
First impressions: A survey on vision-based apparent personality trait analysis
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. However, there has recently been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, we present an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review the subjectivity inherent in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field.
Some like it hot - visual guidance for preference prediction
For people, first impressions of someone are of decisive importance, and they are hard to alter through further information. This raises the question of whether a computer can reach the same judgement. Earlier research has already shown that age, gender, and average attractiveness can be estimated with reasonable precision. We improve on the state of the art, but also predict, based on someone's known preferences, how much that particular person is attracted to a novel face. Our computational pipeline comprises a face detector, convolutional neural networks for the extraction of deep features, standard support vector regression for gender, age, and facial beauty, and, as the main novelties, visually regularized collaborative filtering to infer inter-person preferences as well as a novel regression technique for handling visual queries without rating history. We validate the method using a very large dataset from a dating site as well as images of celebrities. Our experiments yield convincing results, i.e. we predict 76% of the ratings correctly based solely on an image, and reveal some sociologically relevant conclusions. We also validate our collaborative filtering solution on the standard MovieLens rating dataset, augmented with movie posters, to predict an individual's movie rating. We demonstrate our algorithms on howhot.io, which went viral around the Internet with more than 50 million pictures evaluated in the first month. Comment: accepted for publication at CVPR 201