Subjectivity and complexity of facial attractiveness
The origin and meaning of facial beauty represent a longstanding puzzle.
Despite the profuse literature devoted to facial attractiveness, its very
nature, its determinants and the nature of inter-person differences remain
controversial issues. Here we tackle such questions proposing a novel
experimental approach in which human subjects, instead of rating natural faces,
are allowed to efficiently explore the face-space and 'sculpt' their favorite
variation of a reference facial image. The results reveal that different
subjects prefer distinguishable regions of the face-space, highlighting the
essential subjectivity of the phenomenon. The different sculpted facial vectors
exhibit strong correlations among pairs of facial distances, characterising the
underlying universality and complexity of the cognitive processes, and the
relative relevance and robustness of the different facial distances.
Comment: 15 pages, 5 figures. Supplementary information: 26 pages, 13 figures
Learning Social Relation Traits from Face Images
A social relation defines the association, e.g., warmth, friendliness, and
dominance, between two or more people. Motivated by psychological studies, we
investigate if such fine-grained and high-level relation traits can be
characterised and quantified from face images in the wild. To address this
challenging problem, we propose a deep model that learns a rich face
representation to capture gender, expression, head pose, and age-related
attributes, and then performs pairwise-face reasoning for relation prediction.
To learn from heterogeneous attribute sources, we formulate a new network
architecture with a bridging layer to leverage the inherent correspondences
among these datasets. It can also cope with missing target attribute labels.
Extensive experiments show that our approach is effective for fine-grained
social relation learning in images and videos.
Comment: To appear in International Conference on Computer Vision (ICCV) 201
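One way to learn from heterogeneous attribute sources, where each dataset annotates only some targets, is to mask missing labels out of the loss so each sample contributes only its annotated attributes. The sketch below assumes this masking scheme for illustration; it is not necessarily the paper's exact bridging-layer mechanism.

```python
import numpy as np

# Two samples, three attribute targets; NaN marks a missing label
# (e.g., a dataset that annotates expression but not age).
predictions = np.array([[0.9, 0.2, 0.7],
                        [0.1, 0.8, 0.4]])
labels      = np.array([[1.0, 0.0, np.nan],
                        [0.0, 1.0, 1.0]])
mask = ~np.isnan(labels)

# Mean squared error computed over annotated entries only.
sq_err = (predictions - np.where(mask, labels, 0.0)) ** 2
loss = (sq_err * mask).sum() / mask.sum()
print(round(float(loss), 3))  # 0.092
```

The `np.where` keeps the masked positions numerically harmless (multiplying by the mask zeroes them anyway), so gradients for missing attributes would simply vanish in a differentiable framework.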
Generating 3D faces using Convolutional Mesh Autoencoders
Learned 3D representations of human faces are useful for computer vision
problems such as 3D face tracking and reconstruction from images, as well as
graphics applications such as character generation and animation. Traditional
models learn a latent representation of a face using linear subspaces or
higher-order tensor generalizations. Due to this linearity, they cannot
capture extreme deformations and non-linear expressions. To address this, we
introduce a versatile model that learns a non-linear representation of a face
using spectral convolutions on a mesh surface. We introduce mesh sampling
operations that enable a hierarchical mesh representation that captures
non-linear variations in shape and expression at multiple scales within the
model. In a variational setting, our model samples diverse realistic 3D faces
from a multivariate Gaussian distribution. Our training data consists of 20,466
meshes of extreme expressions captured over 12 different subjects. Despite
limited training data, our trained model outperforms state-of-the-art face
models with 50% lower reconstruction error, while using 75% fewer parameters.
We also show that replacing the expression space of an existing
state-of-the-art face model with our autoencoder achieves a lower
reconstruction error. Our data, model and code are available at
http://github.com/anuragranj/com
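The generative step the abstract describes, sampling diverse 3D faces from a multivariate Gaussian latent space, can be sketched as follows. The decoder here is a randomly initialised linear stand-in and the sizes are assumptions; the paper's actual model uses learned spectral convolutions on the mesh.

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim, n_vertices = 8, 5023   # illustrative sizes, not the paper's spec

# Stand-in "decoder": maps a latent code to mesh vertex coordinates.
W = rng.normal(scale=0.01, size=(latent_dim, n_vertices * 3))

def decode(z):
    """Map a latent code to an (n_vertices, 3) array of vertex positions."""
    return (z @ W).reshape(n_vertices, 3)

# In a variational autoencoder, new faces are sampled by drawing
# latent codes from the standard multivariate Gaussian N(0, I).
z = rng.standard_normal(latent_dim)
mesh = decode(z)
print(mesh.shape)  # (5023, 3)
```

Drawing many such `z` vectors yields distinct meshes, which is what lets a trained variational model produce diverse realistic faces rather than a single reconstruction.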
The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism
Computer vision and other biometrics data science applications have commenced
a new project of profiling people. Rather than using 'transaction generated
information', these systems measure the 'real world' and produce an assessment
of the 'world state' - in this case an assessment of some individual trait.
Instead of using proxies or scores to evaluate people, they increasingly deploy
a logic of revealing the truth about reality and the people within it. While
these profiling knowledge claims are sometimes tentative, they increasingly
suggest that only through computation can these excesses of reality be captured
and understood. This article explores the bases of those claims in the systems
of measurement, representation, and classification deployed in computer vision.
It asks if there is something new in this type of knowledge claim, sketches an
account of a new form of computational empiricism being operationalised, and
questions what kind of human subject is being constructed by these
technological systems and practices. Finally, the article explores legal
mechanisms for contesting the emergence of computational empiricism as the
dominant knowledge platform for understanding the world and the people within
it.
The Interaction of Genetic Background and Mutational Effects in Regulation of Mouse Craniofacial Shape.
Inbred genetic background significantly influences the expression of phenotypes associated with known genetic perturbations and can underlie variation in disease severity between individuals with the same mutation. However, the effect of epistatic interactions on the development of complex traits, such as craniofacial morphology, is poorly understood. Here, we investigated the effect of three inbred backgrounds (129X1/SvJ, C57BL/6J, and FVB/NJ) on the expression of craniofacial dysmorphology in mice (Mus musculus) with loss of function in three members of the Sprouty family of growth factor negative regulators (Spry1, Spry2, or Spry4) in order to explore the impact of epistatic interactions on skull morphology. We found that the interaction of inbred background and the Sprouty genotype explains as much craniofacial shape variation as the Sprouty genotype alone. The most severely affected genotypes display a relatively short and wide skull, a rounded cranial vault, and a more highly angled inferior profile. Our results suggest that the FVB background is more resilient to Sprouty loss of function than either C57 or 129, and that Spry4 loss is generally less severe than loss of Spry1 or Spry2. While the specific modifier genes responsible for these significant background effects remain unknown, our results highlight the value of intercrossing mice of multiple inbred backgrounds to identify the genes and developmental interactions that modulate the severity of craniofacial dysmorphology. Our quantitative results represent an important first step toward elucidating genetic interactions underlying variation in robustness to known genetic perturbations in mice.
Time-Efficient Hybrid Approach for Facial Expression Recognition
Facial expression recognition is an emerging research area for improving human-computer interaction. It plays a significant role in social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly improved training time. We predict seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise, and neutral. The model performs well on challenging cases where the expressed emotion could be one of several with quite similar facial characteristics, such as anger, disgust, and sadness. The experiments to test the model were conducted across multiple databases and different facial orientations; to the best of our knowledge, the model achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset, and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these scenarios. Performance evaluation was done with cross-validation techniques to avoid bias toward a specific set of images from a database.
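The cross-validation protocol mentioned at the end of the abstract can be sketched generically. The data and the "model" below are stand-ins (a majority-class predictor in place of the CNN), chosen only to show how k-fold splitting keeps evaluation from being biased toward one subset of images.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

labels = np.arange(700) % 7          # dummy labels for 7 emotion classes
accuracies = []
for train, val in kfold_indices(len(labels), k=5):
    # A real pipeline would train the CNN on images[train] here; we
    # substitute a majority-class predictor purely for illustration.
    majority = np.bincount(labels[train]).argmax()
    accuracies.append(float(np.mean(labels[val] == majority)))

print(len(accuracies))  # 5 per-fold scores; report their mean and spread
```

Averaging the per-fold accuracies, rather than scoring a single fixed split, is what guards against a model looking good only on one favourable subset.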