Encouraging the perceptual underdog: positive affective priming of nonpreferred local–global processes
Two experiments examined affective priming of global and local perception. Participants attempted to detect a target that could be present as either a global or a local shape. Verbal primes were used in one experiment, and pictorial primes in the other. In both experiments, positive primes improved performance on the nonpreferred dimension: for participants exhibiting global precedence, detection of local targets improved significantly, whereas for participants exhibiting local precedence, detection of global targets improved significantly. The results support an interpretation of positive affective priming effects in terms of increased perceptual flexibility.
Background suppressing Gabor energy filtering
In the field of facial emotion recognition, early research advanced with the use of Gabor filters. However, these filters generalize poorly and produce undesirably large feature vectors. Two desired characteristics of a facial appearance feature are generalization capability and compactness of representation. In this paper, we propose a novel texture feature inspired by Gabor energy filters, called background suppressing Gabor energy filtering. The feature has a generalization component that removes background texture, has a reduced feature vector size due to maximal representation and soft orientation histograms, and is a white-box representation. We demonstrate improved performance on the non-trivial Audio/Visual Emotion Challenge 2012 grand-challenge dataset by a factor of 7.17 over the Gabor filter on the development set. We also demonstrate the applicability of our approach beyond facial emotion recognition, improving the classification rate over the Gabor filter on four bioimaging datasets by an average of 8.22%.
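To make the energy-filtering idea concrete, here is a minimal Python sketch of classic Gabor energy (a quadrature kernel pair) with a simple surround-subtraction step standing in for background suppression. The kernel parameters, the Gaussian surround estimate, and the alpha weight are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy import ndimage

def gabor_pair(size=21, wavelength=6.0, sigma=3.0, theta=0.0):
    """Quadrature (even/odd) Gabor kernels at one orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (envelope * np.cos(2 * np.pi * xr / wavelength),
            envelope * np.sin(2 * np.pi * xr / wavelength))

def suppressed_gabor_energy(image, theta=0.0, surround_sigma=8.0, alpha=1.0):
    even, odd = gabor_pair(theta=theta)
    energy = np.sqrt(ndimage.convolve(image, even) ** 2 +
                     ndimage.convolve(image, odd) ** 2)  # classic Gabor energy
    # crude local background estimate via Gaussian smoothing (assumption)
    background = ndimage.gaussian_filter(energy, surround_sigma)
    return np.maximum(energy - alpha * background, 0.0)  # keep above-background texture

response = suppressed_gabor_energy(np.random.rand(64, 64), theta=np.pi / 4)
```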
Group Affect Prediction Using Multimodal Distributions
We describe our approach to building an efficient predictive model for detecting the emotion of a group of people in an image. We propose that training a Convolutional Neural Network (CNN) on emotion heatmaps extracted from the image outperforms a CNN trained entirely on the raw images. The models are compared on the recently published dataset of the Emotion Recognition in the Wild (EmotiW) 2017 challenge. The proposed method achieved a validation accuracy of 55.23%, which is 2.44% above the baseline accuracy provided by the EmotiW organizers.
Comment: This research paper has been accepted at the Workshop on Computer Vision for Active and Assisted Living, WACV 201
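As a sketch of the kind of model described above: a small CNN classifier over heatmap images, written with Keras. The input size, layer widths, and the three group-emotion classes (Positive/Neutral/Negative, as used in EmotiW) are assumptions; the heatmap extraction stage itself is not shown.

```python
from tensorflow.keras import layers, models

def build_heatmap_cnn(input_shape=(128, 128, 3), num_classes=3):
    """Small CNN over emotion heatmaps; all sizes are illustrative."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_heatmap_cnn()
# model.fit(heatmaps, labels, ...) would then train on heatmaps produced upstream
```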
Group-level Emotion Recognition using Transfer Learning from Face Identification
In this paper, we describe the algorithmic approach used for our submissions to the group-level emotion recognition sub-challenge of the fifth Emotion Recognition in the Wild challenge (EmotiW 2017). We extract feature vectors of detected faces using a Convolutional Neural Network trained for the face identification task, rather than the traditional pre-training on emotion recognition problems. In the final pipeline, an ensemble of Random Forest classifiers is trained on the available training set to predict emotion scores. When no faces are detected, one member of the ensemble extracts features from the whole image instead. In our experimental study, the proposed approach showed the lowest error rate among the techniques we explored. In particular, we achieved 75.4% accuracy on the validation data, which is 20% higher than the handcrafted feature-based baseline. The source code, using the Keras framework, is publicly available.
Comment: 5 pages, 3 figures, accepted for publication at ICMI17 (EmotiW Grand Challenge)
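A schematic of the pipeline this abstract describes: per-face descriptors from a face-identification CNN are averaged into an image-level vector and scored by an ensemble of Random Forests, with a fallback path when no face is detected. The feature extractor is stubbed with random vectors here, and the descriptor size and ensemble size are assumptions; the real system would plug in the pretrained network.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURE_DIM = 256  # assumed embedding size of the face-identification CNN
rng = np.random.default_rng(0)

def face_descriptor(face_crop):
    """Stub standing in for the pretrained face-identification CNN."""
    return rng.normal(size=FEATURE_DIM)

def image_descriptor(face_crops):
    """Average per-face embeddings into one image-level descriptor."""
    if not face_crops:
        return None  # no faces detected: fall back to whole-image features
    return np.mean([face_descriptor(f) for f in face_crops], axis=0)

# toy descriptors and labels standing in for the precomputed training set
X = rng.normal(size=(200, FEATURE_DIM))
y = rng.integers(0, 3, size=200)  # Positive / Neutral / Negative

# ensemble of Random Forests; averaged class probabilities give the emotion score
forests = [RandomForestClassifier(n_estimators=100, random_state=s).fit(X, y)
           for s in range(5)]
scores = np.mean([f.predict_proba(X[:4]) for f in forests], axis=0)
predictions = scores.argmax(axis=1)
```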
Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression
We present techniques for improving performance driven facial animation,
emotion recognition, and facial key-point or landmark prediction using learned
identity invariant representations. Established approaches to these problems
can work well if sufficient examples and labels for a particular identity are
available and factors of variation are highly controlled. However, labeled
examples of facial expressions, emotions and key-points for new individuals are
difficult and costly to obtain. In this paper we improve the ability of
techniques to generalize to new and unseen individuals by explicitly modeling
previously seen variations related to identity and expression. We use a
weakly-supervised approach in which identity labels are used to learn the
different factors of variation linked to identity separately from factors
related to expression. We show how probabilistic modeling of these sources of
variation allows one to learn identity-invariant representations for
expressions which can then be used to identity-normalize various procedures for
facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking.
Comment: to appear in the Image and Vision Computing Journal (IMAVIS)
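To make the factorized point-distribution-model idea concrete, here is one simple, non-probabilistic way to split shape variation into identity and expression subspaces using identity labels: PCA (via SVD) on per-identity mean shapes versus PCA on within-identity residuals. This is an illustrative stand-in, not the paper's probabilistic model.

```python
import numpy as np

def fit_factorized_pdm(shapes, identities, n_id=5, n_expr=5):
    """shapes: (N, 2K) flattened landmarks; identities: (N,) integer labels."""
    ids = np.unique(identities)
    mu = shapes.mean(axis=0)
    # identity subspace: principal directions of per-identity mean shapes
    id_means = np.stack([shapes[identities == i].mean(axis=0) for i in ids])
    U_id = np.linalg.svd(id_means - mu, full_matrices=False)[2][:n_id].T
    # expression subspace: principal directions of within-identity residuals
    residuals = shapes - id_means[np.searchsorted(ids, identities)]
    U_expr = np.linalg.svd(residuals, full_matrices=False)[2][:n_expr].T
    return mu, U_id, U_expr

# toy usage: 8 identities x 20 samples of 68 two-dimensional landmarks
rng = np.random.default_rng(0)
identities = np.repeat(np.arange(8), 20)
shapes = rng.normal(size=(160, 136))
mu, U_id, U_expr = fit_factorized_pdm(shapes, identities)
# a shape is then reconstructed as mu + U_id @ p_id + U_expr @ p_expr
```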