Real-time deep hair matting on mobile devices
Augmented reality is an emerging technology in many application domains.
Among them is the beauty industry, where live virtual try-on of beauty products
is of great importance. In this paper, we address the problem of live hair
color augmentation. To achieve this goal, hair needs to be segmented quickly
and accurately. We show how a modified MobileNet CNN architecture can be used
to segment the hair in real-time. Instead of training this network using large
amounts of accurate segmentation data, which is difficult to obtain, we use
crowd-sourced hair segmentation data. While such data is much easier to
obtain, its segmentations are noisy and coarse. Despite this, we show how
our system can produce accurate and fine-detailed hair mattes, while running at
over 30 fps on an iPad Pro tablet.
Comment: 7 pages, 7 figures, submitted to CRV 2018
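
As a rough illustration of the kind of network this abstract describes, the
sketch below shows a MobileNet-style encoder-decoder in PyTorch that predicts
a per-pixel hair mask. The layer widths, the decoder design, and the name
HairMatteNet are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch of a MobileNet-style hair segmentation network (PyTorch).
# Layer widths, the decoder, and the name HairMatteNet are illustrative
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn

def dw_separable(c_in, c_out, stride=1):
    """Depthwise-separable conv block, the core MobileNet building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False),
        nn.BatchNorm2d(c_in), nn.ReLU6(inplace=True),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out), nn.ReLU6(inplace=True),
    )

class HairMatteNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling encoder: 1/2 -> 1/4 -> 1/8 resolution.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU6(inplace=True),
            dw_separable(32, 64),
            dw_separable(64, 128, stride=2),
            dw_separable(128, 128),
            dw_separable(128, 256, stride=2),
        )
        # Lightweight decoder upsamples back to input resolution and
        # predicts a single-channel hair probability map.
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            dw_separable(256, 64),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        # Per-pixel hair mask in [0, 1].
        return torch.sigmoid(self.dec(self.enc(x)))

mask = HairMatteNet()(torch.randn(1, 3, 224, 224))  # -> (1, 1, 224, 224)

Depthwise-separable convolutions are what make this kind of network cheap
enough for 30 fps inference on a tablet: they replace each dense 3x3
convolution with a per-channel spatial filter plus a 1x1 channel mixer.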
SC2GAN: Rethinking Entanglement by Self-correcting Correlated GAN Space
Generative Adversarial Networks (GANs) can synthesize realistic images, with
the learned latent space shown to encode rich semantic information with various
interpretable directions. However, due to the unstructured nature of the
learned latent space, it inherits biases from the training data, where groups
of visual attributes that are not causally related tend to appear together, a
phenomenon known as spurious correlation, e.g., age and eyeglasses, or women
and lipstick. Consequently, the learned distribution often fails to properly
model the under-represented examples, and interpolating along an editing
direction for one attribute can produce entangled changes in other
attributes. To address this problem, previous works
typically adjust the learned directions to minimize the changes in other
attributes, yet they still fail on strongly correlated features. In this work,
we study the entanglement issue in both the training data and the learned
latent space for the StyleGAN2-FFHQ model. We propose a novel framework
SC2GAN that achieves disentanglement by re-projecting low-density latent
code samples in the original latent space and correcting the editing directions
based on both the high-density and low-density regions. By leveraging the
original meaningful directions and semantic region-specific layers, our
framework interpolates the original latent codes to generate images with
attribute combinations that appear infrequently, then inverts these samples
back to the original latent space. We apply our framework to pre-existing
methods that learn meaningful latent directions and showcase its strong
capability to disentangle the attributes with only a small number of
low-density region samples added.
Comment: Accepted to the Out Of Distribution Generalization in Computer Vision
workshop at ICCV 2023
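
The self-correction loop the abstract describes can be summarized in a few
lines. The sketch below is a minimal schematic, assuming a generator G, an
inversion routine invert, and an editing direction d as stand-ins; in the
paper these would correspond to StyleGAN2-FFHQ, a GAN-inversion method, and a
learned semantic direction. The toy definitions at the bottom exist only so
the sketch runs end to end.

# Schematic of the re-projection and direction-correction loop (PyTorch).
import torch

def correct_direction(w_high, d, G, invert, step=3.0):
    """Re-project low-density samples and recompute the editing direction.

    w_high : (N, dim) latent codes from the high-density region
    d      : (dim,) original (entangled) editing direction
    """
    # 1. Interpolate along the original direction to synthesize images
    #    with rare attribute combinations (low-density region).
    w_edit = w_high + step * d
    imgs = G(w_edit)
    # 2. Invert the synthesized samples back into the original latent
    #    space, pulling them onto the learned manifold.
    w_low = invert(imgs)
    # 3. Correct the direction using both regions: the mean displacement
    #    from high-density codes to their re-projected counterparts.
    d_corrected = (w_low - w_high).mean(dim=0)
    return d_corrected / d_corrected.norm()

# Toy stand-ins so the sketch executes (not the paper's actual models).
dim = 512
G = lambda w: w                                     # placeholder generator
invert = lambda x: x + 0.01 * torch.randn_like(x)   # placeholder inversion
w = torch.randn(16, dim)
d = torch.randn(dim)
d = d / d.norm()
d_new = correct_direction(w, d, G, invert)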
Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers
Vision Transformers (ViT) have shown competitive performance advantages over
convolutional neural networks (CNNs), though they often come with high
computational costs. To reduce these costs, previous methods explore different
attention patterns that restrict each token to attending over a fixed number
of spatially nearby tokens, accelerating the ViT's multi-head self-attention
(MHSA) operations.
However, such structured attention patterns limit token-to-token connections
to spatially relevant pairs, disregarding the learned semantic connections
captured by a full attention mask. In this work, we propose a novel
approach to learn instance-dependent attention patterns, by devising a
lightweight connectivity predictor module to estimate the connectivity score of
each pair of tokens. Intuitively, two tokens have high connectivity scores if
the features are considered relevant either spatially or semantically. As each
token only attends to a small number of other tokens, the binarized
connectivity masks are often very sparse by nature and therefore provide the
opportunity to accelerate the network via sparse computations. Equipped with
the learned unstructured attention pattern, sparse attention ViT (Sparsifiner)
produces a superior Pareto-optimal trade-off between FLOPs and top-1 accuracy
on ImageNet compared to token sparsity methods. Our method reduces MHSA FLOPs
by 48% to 69% while keeping the accuracy drop within 0.4%. We also show that
combining attention and token sparsity reduces ViT FLOPs by over 60%.
Comment: Accepted at CVPR 2023
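
A minimal sketch of the mechanism this abstract describes: a lightweight
low-rank connectivity predictor scores every token pair, keeps the top-k
connections per query token, and masks the attention map accordingly. The
dimensions, the top-k budget, and the low-rank design below are assumptions
for illustration; a real implementation would use sparse kernels to skip the
masked entries entirely, whereas dense masking here keeps the sketch short.

# Instance-dependent sparse attention via a connectivity predictor (PyTorch).
# Dimensions, rank, and topk are illustrative assumptions, not the paper's
# exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAttention(nn.Module):
    def __init__(self, dim=384, heads=6, rank=32, topk=16):
        super().__init__()
        self.h, self.dh, self.topk = heads, dim // heads, topk
        self.qkv = nn.Linear(dim, dim * 3)
        # Low-rank projections keep the connectivity predictor cheap.
        self.pq = nn.Linear(dim, rank)
        self.pk = nn.Linear(dim, rank)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, N, dim)
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.h, self.dh).transpose(1, 2)  # (B, h, N, dh)
        k = k.view(B, N, self.h, self.dh).transpose(1, 2)
        v = v.view(B, N, self.h, self.dh).transpose(1, 2)

        # Predict a connectivity score for each token pair, then binarize
        # by keeping only the top-k connections per query token.
        scores = self.pq(x) @ self.pk(x).transpose(1, 2)   # (B, N, N)
        idx = scores.topk(self.topk, dim=-1).indices
        mask = torch.zeros_like(scores).scatter_(-1, idx, 1.0)

        # Full attention logits, masked to the predicted sparse pattern.
        attn = (q @ k.transpose(-2, -1)) / self.dh ** 0.5  # (B, h, N, N)
        attn = attn.masked_fill(mask.unsqueeze(1) == 0, float("-inf"))
        out = F.softmax(attn, dim=-1) @ v
        return self.proj(out.transpose(1, 2).reshape(B, N, -1))

y = SparseAttention()(torch.randn(2, 197, 384))  # ViT-S-like token sequence

Because each query keeps only topk of N connections, the binarized mask is
sparse by construction, which is what opens the door to the sparse-compute
speedups the abstract reports.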