Natural Image Matting via Guided Contextual Attention
Over the last few years, deep learning based approaches have achieved
outstanding improvements in natural image matting. Many of these methods can
generate visually plausible alpha estimations, but typically yield blurry
structures or textures in semitransparent areas. This is due to the local
ambiguity of transparent objects. One possible solution is to leverage the
far-surrounding information to estimate the local opacity. Traditional
affinity-based methods often suffer from high computational complexity, which
makes them unsuitable for high-resolution alpha estimation. Inspired by
affinity-based methods and the success of contextual attention in inpainting,
we develop a novel end-to-end approach for natural image matting with a guided
contextual attention module, which is specifically designed for image matting.
The guided contextual attention module directly propagates high-level opacity
information globally based on the learned low-level affinity. The proposed
method can mimic information flow of affinity-based methods and utilize rich
features learned by deep neural networks simultaneously. Experimental results
on the Composition-1k test set and the alphamatting.com benchmark demonstrate
that our method outperforms state-of-the-art approaches in natural image
matting. Code and models are available at
https://github.com/Yaoyi-Li/GCA-Matting
Comment: AAAI-2
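The core idea above — computing affinities from low-level features and using the same weights to propagate high-level opacity information — can be illustrated with a toy, pure-Python sketch. The function names and the flat list-of-locations layout are illustrative assumptions, not the paper's actual (convolutional, patch-based) implementation:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def guided_propagation(low_feats, high_feats):
    """Toy sketch of guided attention: affinities are learned from
    low-level features, then reused to mix high-level features.

    low_feats:  one low-level feature vector per spatial location
    high_feats: one high-level feature vector per spatial location
    """
    n = len(low_feats)
    out = []
    for i in range(n):
        # affinity of location i to every location j (dot product)
        scores = [sum(a * b for a, b in zip(low_feats[i], low_feats[j]))
                  for j in range(n)]
        weights = softmax(scores)
        # propagate high-level features with the same affinity weights
        dim = len(high_feats[0])
        out.append([sum(w * high_feats[j][d] for j, w in enumerate(weights))
                    for d in range(dim)])
    return out
```

Because the weights at each location form a convex combination, propagated values stay within the range of the high-level inputs — which is why global affinities can fill in locally ambiguous opacity.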
Context-Aware Image Matting for Simultaneous Foreground and Alpha Estimation
Natural image matting is an important problem in computer vision and
graphics. It is an ill-posed problem when only an input image is available
without any external information. While recent deep learning approaches have
shown promising results, they estimate only the alpha matte. This paper
presents a context-aware natural image matting method for simultaneous
foreground and alpha matte estimation. Our method employs two encoder networks
to extract essential information for matting. In particular, we use a matting
encoder to learn local features and a context encoder to obtain more global
context information. We concatenate the outputs from these two encoders and
feed them into decoder networks to simultaneously estimate the foreground and
alpha matte. To train this whole deep neural network, we employ both the
standard Laplacian loss and the feature loss: the former helps to achieve high
numerical performance while the latter leads to more perceptually plausible
results. We also report several data augmentation strategies that greatly
improve the network's generalization performance. Our qualitative and
quantitative experiments show that our method enables high-quality matting for
a single natural image. Our inference codes and models have been made publicly
available at https://github.com/hqqxyy/Context-Aware-Matting
Comment: This is the camera-ready version of the ICCV 2019 paper
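The Laplacian loss mentioned above compares predictions and ground truth band by band across a multi-scale pyramid, so errors at every frequency contribute. Below is a toy 1-D sketch of that idea; the helper names and the simple pair-averaging pyramid are illustrative assumptions, not the paper's 2-D Gaussian-kernel implementation:

```python
def l1(a, b):
    # mean absolute difference between two equal-length signals
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def downsample(x):
    # halve resolution by averaging adjacent pairs (toy pyramid step)
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    # double resolution by repeating each sample (toy inverse step)
    out = []
    for v in x:
        out += [v, v]
    return out

def lap_pyramid(x, levels):
    """Band-pass decomposition: each band is the detail lost by
    downsampling; the final entry is the low-frequency residual."""
    bands = []
    for _ in range(levels - 1):
        low = downsample(x)
        up = upsample(low)[:len(x)]
        bands.append([a - b for a, b in zip(x, up)])
        x = low
    bands.append(x)
    return bands

def laplacian_loss(pred, gt, levels=3):
    # sum of L1 distances between corresponding pyramid bands
    lp, lg = lap_pyramid(pred, levels), lap_pyramid(gt, levels)
    return sum(l1(a, b) for a, b in zip(lp, lg))
```

In the paper's setup this term would be combined with a feature (perceptual) loss, e.g. a weighted sum `laplacian_loss + w * feature_loss`, so that numerical accuracy and perceptual quality are optimized together.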