3D GANs synthesize entire 3D volumes from latent codes rather than only 2D
images. These models offer desirable properties such as high-quality geometry
and multi-view consistency, but, unlike their 2D counterparts, they have only
been partially explored for complex semantic image editing. To address this
problem, we propose LatentSwap3D, a
semantic editing approach based on latent space discovery that can be used with
any off-the-shelf 3D or 2D GAN model and on any dataset. LatentSwap3D relies on
identifying the latent-code dimensions that correspond to a target attribute
via feature ranking with a random forest classifier. It then performs the edit
by swapping those dimensions of the image being edited with the corresponding
ones from an automatically selected reference image, as sketched below.
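The following Python sketch illustrates the two steps in miniature with
scikit-learn's RandomForestClassifier. The latent shape, the choice of k, the
random data, and all function names are illustrative assumptions; the paper's
actual pipeline, including automatic reference-image selection, is omitted.

    # Minimal sketch of latent-dimension ranking and swapping.
    # Assumes latent codes z of shape (n_samples, d) and binary attribute
    # labels y (e.g., from an off-the-shelf attribute classifier).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rank_latent_dims(z, y, n_estimators=100, seed=0):
        """Rank latent dimensions by relevance to the target attribute."""
        clf = RandomForestClassifier(n_estimators=n_estimators,
                                     random_state=seed)
        clf.fit(z, y)
        # Higher feature importance -> more predictive of the attribute.
        return np.argsort(clf.feature_importances_)[::-1]

    def swap_edit(z_source, z_reference, ranked_dims, k):
        """Copy the top-k attribute-relevant dimensions from the reference."""
        z_edit = z_source.copy()
        top_k = ranked_dims[:k]
        z_edit[top_k] = z_reference[top_k]
        return z_edit

    # Usage with synthetic stand-in data (hypothetical, for illustration).
    rng = np.random.default_rng(0)
    z_codes = rng.standard_normal((1000, 512))
    labels = rng.integers(0, 2, size=1000)
    dims = rank_latent_dims(z_codes, labels)
    edited = swap_edit(z_codes[0], z_codes[1], dims, k=32)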
Compared with other latent-space control-based editing methods, which were
designed mainly for 2D GANs, our method provides remarkably consistent,
disentangled semantic edits on 3D GANs and outperforms them both qualitatively
and quantitatively. We show
results on seven 3D GANs (pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D, StyleNeRF,
and VolumeGAN) and on five datasets (FFHQ, AFHQ, Cats, MetFaces, and CompCars).