Fast Deep Matting for Portrait Animation on Mobile Phone
Image matting plays an important role in image and video editing. However,
the formulation of image matting is inherently ill-posed. Traditional methods
usually rely on user interaction, such as trimaps and strokes, to deal with the
image matting problem, and cannot run on mobile phones in real time. In this
paper, we propose a real-time automatic deep matting approach for mobile
devices. By leveraging densely connected blocks and dilated convolution, a
light fully convolutional network is designed to predict a coarse binary mask
for portrait images. A feathering block, which is edge-preserving and
matting-adaptive, is further developed to learn the guided filter and transform
the binary mask into an alpha matte. Finally, an automatic portrait animation
system based on fast deep matting is built on mobile devices; it requires no
interaction and achieves real-time matting at 15 fps. The experiments show
that the proposed approach achieves results comparable to state-of-the-art
matting solvers.

Comment: ACM Multimedia Conference (MM) 2017 camera-ready
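The abstract describes a two-stage pipeline: a light segmentation network produces a coarse binary mask, and a learned feathering block refines it into an alpha matte. The PyTorch sketch below illustrates that idea only; the layer widths, block counts, and the exact feathering formulation are illustrative assumptions, not the architecture from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseDilatedBlock(nn.Module):
    # Small densely connected block using a dilated convolution (illustrative).
    def __init__(self, in_ch, growth=16, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(in_ch + growth, growth, 3,
                               padding=dilation, dilation=dilation)

    def forward(self, x):
        y1 = F.relu(self.conv1(x))
        y2 = F.relu(self.conv2(torch.cat([x, y1], dim=1)))
        return torch.cat([x, y1, y2], dim=1)  # dense concatenation

class CoarseMaskNet(nn.Module):
    # Light fully convolutional net predicting coarse foreground/background logits.
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, stride=2, padding=1)
        self.block = DenseDilatedBlock(32)
        self.head = nn.Conv2d(64, 2, 1)  # 32 + 2 * 16 channels in, fg/bg logits out

    def forward(self, img):
        x = F.relu(self.stem(img))
        x = self.block(x)
        logits = self.head(x)
        # Upsample logits back to the input resolution.
        return F.interpolate(logits, size=img.shape[2:],
                             mode='bilinear', align_corners=False)

class FeatheringBlock(nn.Module):
    # Predicts per-pixel linear coefficients (a, b) so that
    # alpha = a * foreground_probability + b, a guided-filter-like refinement.
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(5, 16, 3, padding=1),  # image (3) + fg/bg probabilities (2)
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 2, 3, padding=1))

    def forward(self, img, coarse_logits):
        prob = torch.softmax(coarse_logits, dim=1)
        a, b = self.conv(torch.cat([img, prob], dim=1)).chunk(2, dim=1)
        alpha = a * prob[:, 1:2] + b
        return alpha.clamp(0.0, 1.0)

if __name__ == "__main__":
    img = torch.rand(1, 3, 128, 128)
    coarse = CoarseMaskNet()(img)            # coarse binary-mask logits
    alpha = FeatheringBlock()(img, coarse)   # refined alpha matte in [0, 1]
    print(alpha.shape)                       # torch.Size([1, 1, 128, 128])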
Natural Image Matting via Guided Contextual Attention
Over the last few years, deep learning based approaches have achieved
outstanding improvements in natural image matting. Many of these methods can
generate visually plausible alpha estimations, but typically yield blurry
structures or textures in semitransparent areas. This is due to the local
ambiguity of transparent objects. One possible solution is to leverage
far-surrounding information to estimate the local opacity. Traditional
affinity-based methods often suffer from high computational complexity and are
not suitable for high-resolution alpha estimation. Inspired by affinity-based
methods and the success of contextual attention in inpainting, we develop a
novel end-to-end approach for natural image matting with a guided contextual
attention module, which is specifically designed for image matting. The guided
contextual attention module directly propagates high-level opacity information
globally based on the learned low-level affinity. The proposed method can mimic
the information flow of affinity-based methods while utilizing the rich
features learned by deep neural networks. Experimental results on the
Composition-1k testing set and the alphamatting.com benchmark demonstrate
that our method outperforms state-of-the-art approaches in natural image
matting. Code and models are available at
https://github.com/Yaoyi-Li/GCA-Matting.

Comment: AAAI-20
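The module described above propagates high-level opacity information globally using affinities built from low-level features. The following PyTorch sketch shows that attention-style propagation pattern under simple assumptions (cosine-similarity affinities and a softmax temperature); it is not the exact GCA-Matting implementation.

import torch
import torch.nn.functional as F

def guided_opacity_propagation(guidance, opacity, temperature=0.1):
    # guidance: (B, C, H, W) low-level image features used to build affinities.
    # opacity:  (B, D, H, W) high-level opacity features to be propagated.
    # Returns opacity features mixed globally according to guidance affinities.
    b, c, h, w = guidance.shape
    d = opacity.shape[1]

    # Flatten spatial dims and L2-normalise guidance so dot products behave
    # like cosine-similarity affinities between locations.
    g = F.normalize(guidance.reshape(b, c, h * w), dim=1)       # (B, C, N)
    affinity = torch.bmm(g.transpose(1, 2), g) / temperature    # (B, N, N)
    weights = torch.softmax(affinity, dim=-1)                   # rows sum to 1

    # Each location's output is a weighted mixture of all locations' opacity
    # features, i.e. global propagation driven by low-level affinity.
    o = opacity.reshape(b, d, h * w)                             # (B, D, N)
    propagated = torch.bmm(o, weights.transpose(1, 2))           # (B, D, N)
    return propagated.reshape(b, d, h, w)

if __name__ == "__main__":
    guidance = torch.rand(1, 32, 16, 16)   # low-level guidance features
    opacity = torch.rand(1, 64, 16, 16)    # high-level opacity features
    out = guided_opacity_propagation(guidance, opacity)
    print(out.shape)                       # torch.Size([1, 64, 16, 16])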
- …