Semantic facial attribute editing using pre-trained Generative Adversarial
Networks (GANs) has attracted a great deal of attention and effort from
researchers in recent years. Owing to the high quality of face images generated
by StyleGANs, much work has focused on the latent space of StyleGANs and on
methods for editing facial images within it. Although these methods achieve
satisfactory results when manipulating user-intended attributes, they fall
short of preserving the subject's identity, which remains an important challenge.
We present ID-Style, a new architecture capable of addressing the problem of
identity loss during attribute manipulation. The key components of ID-Style
include Learnable Global Direction (LGD), which finds a shared and semi-sparse
direction for each attribute, and an Instance-Aware Intensity Predictor (IAIP)
network, which finetunes the global direction according to the input instance.
Furthermore, we introduce two losses during training to enforce the LGD to find
semi-sparse semantic directions, which along with the IAIP, preserve the
identity of the input instance. Despite being roughly 95% smaller than
comparable state-of-the-art networks, ID-Style outperforms baselines by 10%
and 7% on the identity-preservation metric (FRS) and the average manipulation
accuracy (mACC), respectively.
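The editing scheme described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the semi-sparse direction, the `intensity_predictor` function, and the latent dimensionality are all hypothetical stand-ins for the learned LGD and IAIP components.

```python
import numpy as np

latent_dim = 16  # toy size; StyleGAN's W+ space is much larger

# Hypothetical stand-in for the Learnable Global Direction (LGD): a shared,
# semi-sparse unit direction for one attribute (most entries are zero).
direction = np.zeros(latent_dim)
direction[[1, 5, 9, 12]] = [0.5, -1.0, 0.8, 0.3]
direction /= np.linalg.norm(direction)

def intensity_predictor(w):
    # Hypothetical stand-in for the Instance-Aware Intensity Predictor
    # (IAIP): maps the input latent to an instance-specific scaling factor
    # that fine-tunes the shared global direction.
    return 1.0 + 0.1 * np.tanh(w.mean())

def edit(w, strength=3.0):
    # Manipulation: move the latent along the shared semi-sparse direction,
    # scaled by the instance-aware intensity.
    return w + strength * intensity_predictor(w) * direction

rng = np.random.default_rng(0)
w = rng.standard_normal(latent_dim)
w_edited = edit(w)

# Only the semi-sparse support of the direction changes the latent; the
# remaining coordinates are untouched, which is what helps preserve identity.
changed = np.flatnonzero(w_edited - w)
```

The sketch makes the identity-preservation intuition concrete: because the direction is semi-sparse, the edit perturbs only a small subset of latent coordinates, leaving the rest of the representation intact.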