Adversarial Learning of Disentangled and Generalizable Representations for Visual Attributes
Recently, a multitude of methods for image-to-image translation have
demonstrated impressive results on problems such as multi-domain or
multi-attribute transfer. The vast majority of such works leverage the
strengths of adversarial learning and deep convolutional autoencoders to
achieve realistic results by closely capturing the target data distribution.
Nevertheless, the most prominent representatives of this class of methods do
not facilitate semantic structure in the latent space, and usually rely on
binary domain labels for test-time transfer. This leads to rigid models that are
unable to capture the variation within each domain. In this light, we propose a novel
adversarial learning method that (i) facilitates the emergence of latent
structure by semantically disentangling sources of variation, and (ii)
encourages learning generalizable, continuous, and transferable latent codes
that enable flexible attribute mixing. This is achieved by introducing a novel
loss function that encourages representations to result in uniformly
distributed class posteriors for disentangled attributes. In tandem with an
algorithm for inducing generalizable properties, the resulting representations
can be utilized for a variety of tasks such as intensity-preserving
multi-attribute image translation and synthesis, without requiring labelled
test data. We demonstrate the merits of the proposed method by a set of
qualitative and quantitative experiments on popular databases such as MultiPIE,
RaFD, and BU-3DFE, where our method outperforms other state-of-the-art methods
in tasks such as intensity-preserving multi-attribute transfer and synthesis.
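The abstract does not spell out the form of the loss that encourages uniformly distributed class posteriors. As a rough illustration only, one standard way to drive a classifier's posterior towards uniformity is to penalise the KL divergence between the uniform distribution and the classifier's softmax output; the PyTorch-style sketch below shows this idea. The function name `uniform_posterior_loss` and its interface are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def uniform_posterior_loss(logits: torch.Tensor) -> torch.Tensor:
    """Push an attribute classifier's posterior towards the uniform
    distribution, so that the latent chunk it is computed from carries
    no information about that attribute.

    logits: (batch, K) classifier outputs for an attribute that this
            latent chunk is meant to be disentangled from.
    """
    log_probs = F.log_softmax(logits, dim=1)   # log posterior per class
    k = logits.size(1)
    # Cross-entropy against the uniform target 1/K for every class;
    # subtracting log K turns it into KL(uniform || posterior), which
    # is zero exactly when the posterior is uniform.
    ce_uniform = -log_probs.mean(dim=1)        # per-sample H(uniform, p)
    return (ce_uniform - math.log(k)).mean()

# Hypothetical usage: logits from a classifier head applied to, e.g.,
# an identity code, evaluated for an expression attribute it should
# not encode (8 samples, 6 expression classes).
loss = uniform_posterior_loss(torch.randn(8, 6))
```

In such a setup this term would typically be combined with the adversarial and reconstruction losses, applied only to the latent chunks that are supposed to be invariant to the attribute in question.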