Disguised Face Identification (DFI) with Facial KeyPoints using Spatial Fusion Convolutional Network
Disguised face identification (DFI) is an extremely challenging problem due
to the numerous variations that can be introduced using different disguises.
This paper introduces a deep learning framework that first detects 14 facial
keypoints, which are then used to perform disguised face identification.
Since the training of deep learning architectures relies on large annotated
datasets, two annotated facial key-points datasets are introduced. The
effectiveness of the facial keypoint detection framework is presented for each
keypoint. The superiority of the keypoint detection framework is further
demonstrated by a comparison with other deep networks, and classification
performance is compared against state-of-the-art face disguise classification
methods.

Comment: To appear in the IEEE International Conference on Computer Vision
Workshops (ICCVW) 201
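The detection stage described above regresses keypoint coordinates from a face
image. A minimal sketch of that idea, assuming a small PyTorch CNN with
illustrative layer sizes (this is not the paper's spatial fusion architecture,
and `KeypointNet` is a hypothetical name):

```python
import torch
import torch.nn as nn

class KeypointNet(nn.Module):
    """Hypothetical sketch: regress 14 (x, y) facial keypoints
    from a 96x96 grayscale face crop. Layer sizes are illustrative."""

    def __init__(self, num_keypoints=14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 24 * 24, 256), nn.ReLU(),
            nn.Linear(256, num_keypoints * 2),  # one (x, y) pair per keypoint
        )
        self.num_keypoints = num_keypoints

    def forward(self, x):
        out = self.head(self.features(x))
        return out.view(-1, self.num_keypoints, 2)

net = KeypointNet()
out = net(torch.zeros(1, 1, 96, 96))  # shape: (1, 14, 2)
```

The predicted keypoints would then serve as input features to the downstream
disguised-face classifier.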
Learning Residual Images for Face Attribute Manipulation
Face attributes are interesting due to their detailed description of human
faces. Unlike prior research on attribute prediction, we address an inverse and
more challenging problem called face attribute manipulation, which aims to
modify a face image according to a given attribute value. Instead of
manipulating the whole image, we propose to learn the corresponding residual
image defined as the difference between images before and after the
manipulation. In this way, the manipulation can be performed efficiently with
only modest pixel modification. The framework of our approach is based on the
Generative Adversarial Network. It consists of two image transformation
networks and a discriminative network. The transformation networks are
responsible for the attribute manipulation and its dual operation, and the
discriminative network is used to distinguish the generated images from real
images. We also apply dual learning to allow transformation networks to learn
from each other. Experiments show that residual images can be effectively
learned and used for attribute manipulation. The generated images retain most
of the details in attribute-irrelevant areas.
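The core residual-image idea can be sketched as follows: the transformation
network predicts only the difference r = G(x), and the manipulated face is
x + r. This is a minimal illustration assuming a toy PyTorch generator; the
layer sizes and the `ResidualGenerator` name are placeholders, not the paper's
architecture:

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Toy sketch: predict a residual image r and output x + r,
    so attribute-irrelevant pixels can stay near-unchanged."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # residual in (-1, 1)
        )

    def forward(self, x):
        residual = self.net(x)
        return x + residual, residual

g = ResidualGenerator()
x = torch.rand(1, 3, 64, 64)
y, r = g(x)  # y: manipulated image, r: learned residual
```

In the full framework, a second transformation network would perform the dual
(inverse) manipulation, and a discriminator would score y against real images.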
Persistent Evidence of Local Image Properties in Generic ConvNets
Supervised training of a convolutional network for object classification
should make explicit any information related to the class of objects and
disregard any auxiliary information associated with the capture of the image or
the variation within the object class. Does this happen in practice? Although
this seems to hold for the very final layers of the network, we find that it is
not the case in earlier layers: surprisingly, strong spatial information
remains implicit there. This paper investigates this phenomenon, exploiting
the image representation at the first fully connected layer, i.e. the global
image descriptor which has been recently shown to be most effective in a range
of visual recognition tasks. We empirically demonstrate evidence for this
finding in the context of four different tasks: 2D landmark detection, 2D
object keypoint prediction, estimation of the RGB values of the input image,
and recovery of the semantic label of each pixel. We base our investigation on
a simple framework with ridge regression used commonly across these tasks, and
show results that all support our insight. Such spatial information can be used
to compute landmark correspondences with good accuracy, and may also prove
useful for improving the training of convolutional nets for classification
purposes.
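The probing framework above fits a ridge regression from the global image
descriptor to a spatial target. A minimal sketch of that readout, using
scikit-learn's `Ridge` on synthetic stand-in features (the data here is
fabricated for illustration only; it is not the paper's descriptors or
landmarks):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in for fc-layer descriptors: 400 images, 64-dim features,
# each with a 2-D "landmark" target that depends linearly on the features.
rng = np.random.default_rng(0)
n, d = 400, 64
feats = rng.normal(size=(n, d))
W = rng.normal(size=(d, 2))
landmarks = feats @ W + 0.01 * rng.normal(size=(n, 2))

# Ridge-regression probe: train on 300 images, evaluate on the held-out 100.
probe = Ridge(alpha=1.0).fit(feats[:300], landmarks[:300])
pred = probe.predict(feats[300:])
err = np.abs(pred - landmarks[300:]).mean()  # low error => spatial info is linearly decodable
```

If the descriptor truly discarded spatial information, a linear probe like this
could not recover landmark positions; a low held-out error is the evidence the
paper reports across its four tasks.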