
    Fast Deep Matting for Portrait Animation on Mobile Phone

    Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually rely on user interaction, such as trimaps and strokes, and cannot run on a mobile phone in real time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging densely connected blocks and dilated convolution, a lightweight fully convolutional network is designed to predict a coarse binary mask for portrait images. A feathering block, which is edge-preserving and matting-adaptive, is further developed to learn a guided filter and transform the binary mask into an alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices; it requires no interaction and achieves real-time matting at 15 fps. Experiments show that the proposed approach achieves results comparable to state-of-the-art matting solvers.
    Comment: ACM Multimedia Conference (MM) 2017 camera-ready
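    The feathering block above is described as learning a guided filter to turn the coarse binary mask into an alpha matte. As a rough point of reference for what that block builds on (the classical guided filter, not the paper's learned variant), here is a minimal NumPy sketch; the `boxfilter`/`guided_filter` names, the radius `r`, and the regularizer `eps` are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def boxfilter(img, r):
        """Sum of img over a (2r+1)x(2r+1) window, clipped at the borders
        (O(1) per pixel via cumulative sums)."""
        h, w = img.shape
        dst = np.zeros_like(img, dtype=float)
        c = np.cumsum(img, axis=0)
        dst[0:r + 1] = c[r:2 * r + 1]
        dst[r + 1:h - r] = c[2 * r + 1:h] - c[0:h - 2 * r - 1]
        dst[h - r:h] = c[h - 1] - c[h - 2 * r - 1:h - r - 1]
        c = np.cumsum(dst, axis=1)
        out = np.zeros_like(img, dtype=float)
        out[:, 0:r + 1] = c[:, r:2 * r + 1]
        out[:, r + 1:w - r] = c[:, 2 * r + 1:w] - c[:, 0:w - 2 * r - 1]
        out[:, w - r:w] = c[:, w - 1:w] - c[:, w - 2 * r - 1:w - r - 1]
        return out

    def guided_filter(I, p, r=4, eps=1e-3):
        """Edge-preserving refinement of p (e.g. a coarse binary mask)
        guided by image I; returns a smooth alpha-like map."""
        N = boxfilter(np.ones_like(I, dtype=float), r)   # per-pixel window size
        mean_I = boxfilter(I, r) / N
        mean_p = boxfilter(p, r) / N
        cov_Ip = boxfilter(I * p, r) / N - mean_I * mean_p
        var_I = boxfilter(I * I, r) / N - mean_I ** 2
        a = cov_Ip / (var_I + eps)     # local linear coefficient
        b = mean_p - a * mean_I
        return boxfilter(a, r) / N * I + boxfilter(b, r) / N  # q = mean_a*I + mean_b

    # usage: refine a hard step mask using a smooth-gradient guide image
    I = np.tile(np.linspace(0.0, 1.0, 24), (20, 1))             # guide
    p = (np.arange(24) >= 10).astype(float) * np.ones((20, 1))  # coarse binary mask
    q = guided_filter(I, p, r=4)                                # alpha-like output
    ```

    In flat regions the output reduces to a local average of the mask, while near guide edges the local linear model lets the matte follow the image gradient, which is what makes the filter edge-preserving.
    
    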

    A unified model sharing framework for moving object detection

    © 2015 Elsevier B.V. All rights reserved. Millions of surveillance cameras have been installed in public areas, producing vast amounts of video data every day, so there is an urgent need for intelligent techniques that automatically detect and segment moving objects. Many approaches to moving object detection based on background modeling have been developed in the literature. Most focus on temporal information and partly or totally ignore spatial information, which makes them sensitive to noise and background motion. In this paper, we propose a unified model sharing framework for moving object detection. First, to exploit the spatial-temporal correlation across different pixels, we establish a many-to-one correspondence through model sharing between pixels: a pixel is labeled as foreground or background by searching for an optimal matching model in its neighborhood. A random sampling strategy is then introduced for online updating of the shared models. In this way, the total number of models is reduced dramatically while each pixel is still matched to a proper model. Furthermore, existing approaches can be naturally embedded into the proposed sharing framework; two popular ones, the statistical model and the sample consensus model, are used to verify its effectiveness. Experiments and comparisons on the ChangeDetection 2014 benchmark demonstrate the superiority of the model sharing solution.
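    The sharing idea can be illustrated with the sample consensus variant: a pixel is accepted as background if any model in its neighborhood explains the observation, and matched models are updated by random sampling. The sketch below is a toy simplification under assumed parameters (sample count, match radius, subsampling rate, 3x3 search window), not the paper's implementation.

    ```python
    import numpy as np

    class SharedModelDetector:
        """Toy sample-consensus background subtraction with 3x3 neighborhood
        model sharing (illustrative, not the paper's implementation)."""

        def __init__(self, first_frame, n_samples=8, radius=20,
                     min_matches=2, subsample=8):
            self.n = n_samples
            self.radius = radius            # intensity tolerance for a sample match
            self.min_matches = min_matches  # samples needed to call it background
            self.subsample = subsample      # 1/subsample chance of a model update
            self.rng = np.random.default_rng(0)
            # each pixel keeps n past observations, seeded from the first frame
            self.samples = np.repeat(first_frame[None].astype(np.int16),
                                     n_samples, axis=0)

        def _explains(self, y, x, v):
            """Does the model stored at (y, x) have enough samples near v?"""
            hits = np.count_nonzero(np.abs(self.samples[:, y, x] - v) <= self.radius)
            return hits >= self.min_matches

        def apply(self, frame):
            h, w = frame.shape
            fg = np.zeros((h, w), dtype=bool)
            for y in range(h):
                for x in range(w):
                    v = int(frame[y, x])
                    # model sharing: search the 3x3 neighborhood for ANY model
                    # that explains the current observation
                    matched = any(
                        self._explains(y + dy, x + dx, v)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if 0 <= y + dy < h and 0 <= x + dx < w
                    )
                    if matched:
                        # conservative online update by random sampling
                        if self.rng.integers(self.subsample) == 0:
                            self.samples[self.rng.integers(self.n), y, x] = v
                    else:
                        fg[y, x] = True
            return fg

    # usage: a static scene stays background; a bright square is flagged
    det = SharedModelDetector(np.full((12, 12), 100))
    frame = np.full((12, 12), 100)
    frame[4:8, 4:8] = 250
    mask = det.apply(frame)
    ```

    Because neighboring pixels can reuse each other's models, a small camera jitter that shifts an edge by one pixel is still explained by a nearby model, which is the noise and background-motion robustness the paper attributes to the spatial correspondence.
    
    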