We present a novel method for solving image analogy problems: it learns the
relation between paired images in the training data, then generalizes to
generate images that follow the same relation but were never seen in the
training set. We therefore call the method Conditional Analogy
Generative Adversarial Network (CAGAN), as it is based on adversarial training
and employs deep convolutional neural networks. An especially interesting
application of that technique is automatic swapping of clothing on fashion
model photos. Our work makes the following contributions. First, we define
the end-to-end trainable CAGAN architecture, which implicitly learns
segmentation masks without requiring expensive supervised labeling data.
Second, we present experimental results showing plausible segmentation masks
and often convincing swapped images, given a target article of clothing.
Finally, we discuss the next steps
for this technique: improvements to the neural network architecture and more
advanced applications.

Comment: To appear at the International Conference on Computer Vision, ICCV
2017, Workshop on Computer Vision for Fashion
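One way to picture the implicitly learned segmentation mask is as a soft blend weight: the generator proposes new pixel content together with a mask, and the output keeps the original photo wherever the mask is near zero. The following NumPy sketch illustrates that compositing step; the function name, shapes, and the exact blending formula are illustrative assumptions, not the paper's stated interface.

```python
import numpy as np

def composite_with_mask(original, generated, alpha):
    """Blend generated pixels into the original image with a soft mask.

    Illustrative sketch (an assumption, not the paper's exact method):
    `alpha` in [0, 1] selects generated content where clothing should
    change and preserves the original photo elsewhere.
    """
    alpha = np.clip(alpha, 0.0, 1.0)  # keep the mask a valid blend weight
    return alpha * generated + (1.0 - alpha) * original

# Toy example: 2x2 single-channel "images".
original = np.array([[0.2, 0.2],
                     [0.2, 0.2]])
generated = np.array([[0.9, 0.9],
                      [0.9, 0.9]])
alpha = np.array([[1.0, 0.0],
                  [0.0, 1.0]])  # swap only two of the four pixels

result = composite_with_mask(original, generated, alpha)
print(result)
```

Because the mask is produced by the generator itself and only the blended output is judged by the discriminator, the segmentation emerges as a by-product of adversarial training rather than from labeled masks.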