Modifying facial images with desired attributes is an important yet challenging task in computer vision, where the aim is to edit single or multiple attributes of a face image. Existing methods are based either on attribute-independent approaches, in which the modification is performed in the latent representation, or on attribute-dependent approaches. Attribute-independent methods are limited in performance, as they require paired data for changing the desired attributes. Moreover, the attribute-independent constraint may cause a loss of information and hence fail to generate the required attributes in the face image. In contrast, attribute-dependent approaches are effective, as they can modify the required features while preserving the remaining information in the given image. However, attribute-dependent approaches are sensitive and require careful model design to generate high-quality results. To address this problem, we propose an attribute-dependent face modification approach. The proposed approach is based on two generators and two discriminators that utilize both the binary and the real-valued representations of the attributes and, in turn, generate high-quality attribute modification results. Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while keeping the other facial details intact.