Deep Saliency with Encoded Low level Distance Map and High Level Features
Recent advances in saliency detection have utilized deep learning to obtain
high level features to detect salient regions in a scene. These advances have
demonstrated superior results over previous works that utilize hand-crafted low
level features for saliency detection. In this paper, we demonstrate that
hand-crafted features can provide complementary information to enhance
performance of saliency detection that utilizes only high level features. Our
method utilizes both high level and low level features for saliency detection
under a unified deep learning framework. The high level features are extracted
using the VGG-net, and the low level features are compared with other parts of
an image to form a low level distance map. The low level distance map is then
encoded using a convolutional neural network (CNN) with multiple 1×1
convolutional and ReLU layers. We concatenate the encoded low level distance
map and the high level features, and connect them to a fully connected neural
network classifier to evaluate the saliency of a query region. Our experiments
show that our method can further improve the performance of state-of-the-art
deep learning-based saliency detection methods.Comment: Accepted by IEEE Conference on Computer Vision and Pattern
Recognition(CVPR) 2016. Project page:
https://github.com/gylee1103/SaliencyEL
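
The fusion described in the abstract (pooled VGG features concatenated with a 1×1-conv-encoded low level distance map, fed to a fully connected classifier) can be sketched compactly. Below is a minimal PyTorch sketch; the backbone choice (VGG-16), channel sizes, encoder depth, and all names are illustrative assumptions, not the authors' released code (see the project page above for that).

# Minimal sketch of the described pipeline (all sizes and names are
# assumptions for illustration, not the authors' implementation).
import torch
import torch.nn as nn
import torchvision.models as models

class SaliencySketch(nn.Module):
    def __init__(self, distance_channels=64, encoded_channels=128):
        super().__init__()
        # High-level features from a VGG backbone (use pretrained weights in practice).
        self.high_level = models.vgg16().features
        # Encode the low-level distance map with stacked 1x1 conv + ReLU layers.
        self.distance_encoder = nn.Sequential(
            nn.Conv2d(distance_channels, encoded_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(encoded_channels, encoded_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fully connected classifier over the concatenated, pooled features.
        self.classifier = nn.Sequential(
            nn.Linear(512 + encoded_channels, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),  # saliency score for the query region
        )

    def forward(self, image, distance_map):
        h = self.pool(self.high_level(image)).flatten(1)            # (B, 512)
        l = self.pool(self.distance_encoder(distance_map)).flatten(1)
        return torch.sigmoid(self.classifier(torch.cat([h, l], dim=1)))

# Example: one 224x224 image with a 64-channel low-level distance map.
model = SaliencySketch()
score = model(torch.randn(1, 3, 224, 224), torch.randn(1, 64, 224, 224))
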
Attribute-Guided Face Generation Using Conditional CycleGAN
We are interested in attribute-guided face generation: given a low-res face
input image and an attribute vector that can be extracted from a high-res
image (the attribute image), our new method generates a high-res face image
for the low-res input that satisfies the given attributes. To address this
problem, we
condition the CycleGAN and propose the conditional CycleGAN, which is designed
to 1) handle unpaired training data, since the training low-res/high-res and
high-res attribute images may not align with each other, and 2) allow easy
control of the appearance of the generated face via the input attributes.
We demonstrate impressive results on the attribute-guided conditional CycleGAN,
which can synthesize realistic face images with appearance easily controlled by
user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). By
using the attribute image as the identity to produce the corresponding
conditional vector, and by incorporating a face verification network, the
attribute-guided network becomes an identity-guided conditional CycleGAN that
produces impressive results on identity transfer. We demonstrate three
applications of the identity-guided conditional CycleGAN: identity-preserving
face super-resolution, face swapping, and frontal face generation, which
consistently show the advantage of our new method.
Comment: ECCV 201
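
For illustration, here is a minimal PyTorch sketch of the core conditioning idea: tiling the attribute vector spatially and concatenating it with the (upsampled) low-res input before a CycleGAN-style generator. The layer sizes, attribute dimension, and all names are assumptions, not the paper's architecture; the full method additionally trains with cycle-consistency and adversarial losses, and the identity-guided variant adds a face verification network.

# Sketch of conditioning a CycleGAN-style generator on an attribute vector
# (architecture and names are illustrative assumptions).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, attr_dim=10, base_channels=64):
        super().__init__()
        # Input: image channels plus the tiled attribute vector.
        self.net = nn.Sequential(
            nn.Conv2d(3 + attr_dim, base_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 3, 3, padding=1),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, lowres_up, attrs):
        # Tile the attribute vector spatially and concatenate it with the
        # image, so every pixel sees the conditioning signal.
        b, _, h, w = lowres_up.shape
        attr_maps = attrs.view(b, -1, 1, 1).expand(b, attrs.size(1), h, w)
        return self.net(torch.cat([lowres_up, attr_maps], dim=1))

# Example: upsampled low-res faces plus 10-dim attribute vectors
# (e.g., gender, hair color bits); CycleGAN's cycle-consistency and
# adversarial losses would be applied on top of this during training.
g = ConditionalGenerator()
out = g(torch.randn(2, 3, 128, 128), torch.rand(2, 10))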