Geometry-Aware Face Completion and Editing
Face completion is a challenging generation task because it requires
generating visually pleasing new pixels that are semantically consistent with
the unmasked face region. This paper proposes a geometry-aware Face Completion
and Editing NETwork (FCENet) that systematically exploits facial geometry
estimated from the unmasked region. First, a facial geometry estimator is
learned to predict facial landmark heatmaps and parsing maps from the unmasked
face image. Then, an encoder-decoder generator completes the face image and
disentangles its mask areas, conditioned on both the masked face image and the
estimated facial geometry images. In addition, since manually labeled masks
exhibit a low-rank property, a low-rank regularization term is imposed on the
disentangled masks, enabling the completion network to handle occlusion areas
of various shapes and sizes. Furthermore, the network can generate diverse
results from the same masked input by modifying the estimated facial geometry,
which provides a flexible means to edit the completed face appearance.
Extensive experimental results qualitatively and quantitatively demonstrate
that our network generates visually pleasing face completion results and can
edit face attributes as well.
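The abstract does not specify which low-rank surrogate FCENet uses; a common
convex choice is the nuclear norm (the sum of singular values), which is small
for block-like occlusion masks and large for scattered, noise-like ones. A
minimal illustration of that intuition, not the actual FCENet loss:

```python
import numpy as np

def nuclear_norm(mask):
    """Nuclear norm (sum of singular values): a convex surrogate for rank."""
    return np.linalg.svd(mask, compute_uv=False).sum()

# A rectangular occlusion mask is rank-1, so its nuclear norm is small...
rect = np.zeros((10, 10))
rect[2:6, 3:8] = 1.0  # 4x5 block of ones -> rank 1

# ...whereas a scattered, noise-like mask has many nonzero singular values.
rng = np.random.default_rng(0)
noisy = (rng.random((10, 10)) > 0.5).astype(float)
```

In training, such a term would be added to the generator loss with a weight,
penalizing disentangled masks that deviate from simple, low-rank shapes.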
Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment
Facial action unit (AU) detection and face alignment are two highly
correlated tasks, since facial landmarks provide precise AU locations that
facilitate the extraction of meaningful local features for AU detection. Most
existing AU detection works treat face alignment as a preprocessing step and
handle the two tasks independently. In this paper, we propose a novel
end-to-end deep learning framework for joint AU detection and face alignment,
which has not been explored before. In particular, multi-scale shared features
are learned first, and high-level face alignment features are fed into AU
detection. Moreover, to extract precise local features, we propose an adaptive
attention learning module that refines the attention map of each AU
adaptively. Finally, the assembled local features are integrated with face
alignment features and global features for AU detection. Experiments on the
BP4D and DISFA benchmarks demonstrate that our framework significantly
outperforms state-of-the-art methods for AU detection.
Comment: This paper has been accepted by ECCV 201
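The premise that landmarks provide precise AU locations can be sketched with a
predefined attention map: a Gaussian peaked at a landmark-derived AU position,
which the adaptive module would then refine. The center and sigma below are
illustrative assumptions, not values from the paper:

```python
import numpy as np

def au_attention_map(h, w, center, sigma=3.0):
    """Gaussian attention map peaking at a landmark-derived AU location."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

# e.g. an initial map for an AU located near a mouth-corner landmark
attn = au_attention_map(64, 64, center=(40, 22))
```

Local features for the AU would then be pooled from the shared feature maps,
weighted by this (refined) attention map.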
DCTNet : A Simple Learning-free Approach for Face Recognition
PCANet was proposed as a lightweight deep learning network that mainly
leverages Principal Component Analysis (PCA) to learn multistage filter banks,
followed by binarization and block-wise histogramming. PCANet was shown to
work surprisingly well in various image classification tasks. However, PCANet
is data-dependent and hence inflexible. In this paper, we propose a
data-independent network, dubbed DCTNet, for face recognition, in which we
adopt the Discrete Cosine Transform (DCT) as the filter bank in place of PCA.
This is motivated by the fact that the 2D DCT basis is a good approximation of
the high-ranked eigenvectors of PCA. Both the 2D DCT basis and the PCA
eigenvectors resemble modulated sine-wave patterns, which can be perceived as
a bandpass filter bank. DCTNet is free from learning, as the 2D DCT bases can
be computed in advance. Besides that, we also propose an effective method to
regulate the block-wise histogram feature vector of DCTNet for robustness,
which is shown to provide a surprising performance boost when the probe image
differs considerably in appearance from the gallery image. We evaluate DCTNet
extensively on a number of benchmark face databases and achieve accuracy on
par with, or often better than, PCANet.
Comment: APSIPA ASC 201
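Because the 2D DCT bases require no training, the first-stage filter bank can
be written down directly as outer products of 1D DCT-II basis vectors. A
minimal sketch for k x k filters (the filter size is an assumption, and the
multistage pipeline is omitted):

```python
import numpy as np

def dct_filter_bank(k):
    """All k*k orthonormal 2D DCT-II basis filters of size k x k."""
    n = np.arange(k)
    # 1D DCT-II basis: rows are cosines at increasing frequency
    basis = np.array([np.cos((2 * n + 1) * u * np.pi / (2 * k))
                      for u in range(k)])
    basis *= np.sqrt(2.0 / k)
    basis[0] /= np.sqrt(2.0)  # DC row rescaled for orthonormality
    # 2D filters as outer products of 1D basis vectors
    return np.array([np.outer(basis[u], basis[v])
                     for u in range(k) for v in range(k)])

filters = dct_filter_bank(3)  # 9 learning-free 3x3 convolution filters
```

These filters would be convolved with the input image exactly as PCANet
convolves its learned PCA filters, followed by the same binarization and
block-wise histogramming stages.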
Facial emotion recognition using min-max similarity classifier
Recognition of human emotions from imaging templates is useful in a wide
variety of human-computer interaction and intelligent systems applications.
However, automatic recognition of facial expressions using image template
matching techniques suffers from the natural variability of facial features
and recording conditions. Despite the progress achieved in facial emotion
recognition in recent years, an effective and computationally simple feature
selection and classification technique for emotion recognition remains an
open problem. In this paper, we propose an efficient and straightforward
facial emotion recognition algorithm that reduces the problem of inter-class
pixel mismatch during classification. The proposed method applies pixel
normalization to remove intensity offsets, followed by a Min-Max metric in a
nearest neighbor classifier that is capable of suppressing feature outliers.
The results indicate an improvement in recognition performance from 92.85% to
98.57% for the proposed Min-Max classification method when tested on the
JAFFE database. The proposed emotion recognition technique outperforms the
existing template matching methods.
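One plausible reading of the Min-Max metric (the paper's exact formulation may
differ) is the ratio of elementwise minima to elementwise maxima, used as a
similarity in a 1-NN classifier after an offset-removing normalization. A
sketch, with the normalization scheme as an assumption:

```python
import numpy as np

def normalize(img):
    """Remove the intensity offset and rescale to [0, 1] (assumed scheme)."""
    img = np.asarray(img, dtype=float)
    img -= img.min()
    peak = img.max()
    return img / peak if peak > 0 else img

def min_max_similarity(a, b):
    """Min-Max similarity: high when features agree, damped by outliers."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def classify(probe, gallery, labels):
    """1-nearest-neighbor classification under the Min-Max similarity."""
    p = normalize(probe).ravel()
    sims = [min_max_similarity(p, normalize(g).ravel()) for g in gallery]
    return labels[int(np.argmax(sims))]
```

Because each pixel contributes at most its min/max ratio, a few badly
mismatched pixels cannot dominate the score the way they can in a squared
Euclidean distance, which is the outlier-suppression property the abstract
describes.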