Contrastive Learning for Diverse Disentangled Foreground Generation
We introduce a new method for diverse foreground generation with explicit
control over various factors. Existing image inpainting based foreground
generation methods often struggle to generate diverse results and rarely allow
users to explicitly control specific factors of variation (e.g., varying the
facial identity or expression for face inpainting results). We leverage
contrastive learning with latent codes to generate diverse foreground results
for the same masked input. Specifically, we define two sets of latent codes,
where one controls a pre-defined factor (``known''), and the other controls the
remaining factors (``unknown''). The sampled latent codes from the two sets
jointly bi-modulate the convolution kernels to guide the generator to
synthesize diverse results. Experiments demonstrate the superiority of our
method over the state of the art in result diversity and generation
controllability.
Comment: ECCV 202
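The bi-modulation idea above can be sketched as follows: two independently sampled latent codes, one for the "known" factor and one for the "unknown" factors, are each mapped to per-channel styles that jointly rescale a convolution kernel, so different samples yield different effective kernels and hence diverse outputs. This is an illustrative numpy sketch in the spirit of StyleGAN2-style weight modulation, not the paper's implementation; all names (`bi_modulate`, `A_known`, `A_unknown`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bi_modulate(weight, z_known, z_unknown, A_known, A_unknown):
    """Jointly rescale a conv kernel's input channels by styles from two codes.

    weight: (out_ch, in_ch, k, k) convolution kernel
    z_*:    sampled latent codes; A_*: affine maps to per-channel scales
    (illustrative names, not from the paper)
    """
    s_known = A_known @ z_known        # (in_ch,) style from the "known" factor
    s_unknown = A_unknown @ z_unknown  # (in_ch,) style from the "unknown" factors
    # bi-modulation: both style vectors jointly rescale the kernel
    return weight * (s_known * s_unknown)[None, :, None, None]

out_ch, in_ch, k, d = 8, 4, 3, 16
W = rng.standard_normal((out_ch, in_ch, k, k))
A_k = rng.standard_normal((in_ch, d))
A_u = rng.standard_normal((in_ch, d))

# two independent samples of the latent codes for the same masked input
W1 = bi_modulate(W, rng.standard_normal(d), rng.standard_normal(d), A_k, A_u)
W2 = bi_modulate(W, rng.standard_normal(d), rng.standard_normal(d), A_k, A_u)
# different sampled codes produce different effective kernels -> diverse results
assert W1.shape == W.shape and not np.allclose(W1, W2)
```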
Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Over the past few years, adversarial training has become an extremely active
research topic and has been successfully applied to various Artificial
Intelligence (AI) domains. As a potentially crucial technique for the
development of the next generation of emotional AI systems, we herein provide a
comprehensive overview of the application of adversarial training to affective
computing and sentiment analysis. Various representative adversarial training
algorithms are explained and discussed accordingly, aimed at tackling diverse
challenges associated with emotional AI systems. Further, we highlight a range
of potential future research directions. We expect that this overview will help
facilitate the development of adversarial training for affective computing and
sentiment analysis in both the academic and industrial communities.
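One representative adversarial-training algorithm of the kind such a survey covers is FGSM-style training: perturb each input along the sign of the loss gradient and train on clean and perturbed examples together. The toy sketch below applies this to a linear sentiment classifier on synthetic features; it is a generic illustration under stated assumptions, not any specific method from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# synthetic stand-in for sentiment features: x in R^8, binary label y
X = rng.standard_normal((200, 8))
w_true = rng.standard_normal(8)
y = (X @ w_true > 0).astype(float)

w = np.zeros(8)
eps, lr = 0.1, 0.5
for _ in range(200):
    p = sigmoid(X @ w)
    # gradient of the logistic loss w.r.t. the *input*: (p - y) * w
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)   # FGSM perturbation
    # update on clean and adversarial batches alike
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```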
BlendFace: Re-designing Identity Encoders for Face-Swapping
The great advancements of generative adversarial networks and face
recognition models in computer vision have made it possible to swap identities
on images from single sources. Although many studies seem to have proposed
nearly satisfactory solutions, we observe that previous methods still suffer
from identity-attribute entanglement, which causes undesired attribute
swapping, because widely used identity encoders, e.g., ArcFace, carry crucial
attribute biases owing to their pretraining on face recognition tasks. To address this
issue, we design BlendFace, a novel identity encoder for face-swapping. The key
idea behind BlendFace is that training face recognition models on blended
images, whose attributes are replaced with those of another person, mitigates
inter-personal biases such as hairstyles. BlendFace feeds disentangled identity
features into generators and properly guides them as an identity loss function.
Extensive experiments demonstrate that BlendFace improves the
identity-attribute disentanglement in face-swapping models, maintaining a
comparable quantitative performance to previous methods.
Comment: ICCV2023. Code: https://github.com/mapooon/BlendFace, Webpage:
https://mapooon.github.io/BlendFacePage
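The two ingredients named in the abstract, a blended training image whose attribute regions come from another face, and an identity loss computed on encoder features, can be sketched minimally as below. This is an illustrative numpy sketch, not BlendFace's actual pipeline; the hard-coded "hair" mask and the function names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def blend(face_a, face_b, attr_mask):
    """Replace attribute regions (e.g., hair) of face_a with face_b's pixels.

    attr_mask is 1 where pixels should come from face_b (illustrative names).
    """
    return attr_mask * face_b + (1.0 - attr_mask) * face_a

def identity_loss(feat_swapped, feat_source):
    """Cosine-distance identity loss between identity-encoder features."""
    cos = feat_swapped @ feat_source / (
        np.linalg.norm(feat_swapped) * np.linalg.norm(feat_source))
    return 1.0 - cos

face_a = rng.random((64, 64, 3))
face_b = rng.random((64, 64, 3))
mask = np.zeros((64, 64, 1))
mask[:16] = 1.0  # pretend the top rows are the hair/attribute region

blended = blend(face_a, face_b, mask)
assert blended.shape == face_a.shape
assert np.allclose(blended[:16], face_b[:16])   # attribute region swapped in
assert np.allclose(blended[16:], face_a[16:])   # identity region preserved

feat = rng.standard_normal(512)
# identical identity features incur (numerically) zero identity loss
assert abs(identity_loss(feat, feat)) < 1e-6
```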