Detail-preserving and Content-aware Variational Multi-view Stereo Reconstruction
Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view
images is a fundamental yet active research area in computer vision. Despite
the steady progress in multi-view stereo reconstruction, most existing methods
are still limited in recovering fine-scale details and sharp features while
suppressing noise, and may fail to reconstruct poorly textured regions.
To address these limitations, this paper presents a Detail-preserving and
Content-aware Variational (DCV) multi-view stereo method, which reconstructs
the 3D surface by alternating between reprojection error minimization and mesh
denoising. In reprojection error minimization, we propose a novel inter-image
similarity measure that effectively preserves fine-scale details of the
reconstructed surface and builds a connection between guided image filtering
and image registration. In mesh denoising, we propose a content-aware
ℓp-minimization algorithm that adaptively estimates the value of p and the
regularization parameters from the current input; it is far more effective at
suppressing noise while preserving sharp features than conventional
isotropic mesh smoothing. Experimental results on benchmark datasets
demonstrate that our DCV method is capable of recovering more surface details,
and obtains cleaner and more accurate reconstructions than state-of-the-art
methods. In particular, our method achieves the best results among all
published methods on the Middlebury dino ring and dino sparse ring datasets in
terms of both completeness and accuracy.

Comment: 14 pages, 16 figures. Submitted to IEEE Transactions on Image
Processing.
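As a toy illustration of why sparsity-promoting ℓp-style regularization with small p preserves sharp features, the sketch below contrasts soft thresholding (p = 1) with hard thresholding (the p → 0 limit) on a 1-D signal. This is a hypothetical stand-in with our own function names, not the paper's content-aware mesh-denoising algorithm, which operates on mesh geometry and chooses p and the regularization weights adaptively.

```python
import numpy as np

def shrink(x, lam, p):
    """Proximal (shrinkage) operators for a lam * |x|^p penalty.
    p = 1 -> soft thresholding; p = 0 -> hard thresholding.
    Smaller p leaves large-magnitude entries (sharp features)
    untouched while still zeroing small ones (noise)."""
    if p == 1:
        # soft threshold: shrink everything toward zero by lam
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    if p == 0:
        # hard threshold: keep entries above sqrt(2*lam) unchanged
        return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)
    raise ValueError("only p in {0, 1} in this sketch")

signal = np.array([5.0, 0.3, -4.0, 0.1])  # large = feature, small = noise
soft = shrink(signal, 1.0, 1)             # features are shrunk too
hard = shrink(signal, 1.0, 0)             # features survive exactly
```

Note that soft thresholding biases the large entries (the features) toward zero, while hard thresholding keeps them intact, which is the intuition behind preferring small p for feature preservation.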
Guided patch-wise nonlocal SAR despeckling
We propose a new method for SAR image despeckling which leverages information
drawn from co-registered optical imagery. Filtering is performed by plain
patch-wise nonlocal means, operating exclusively on SAR data. However, the
filtering weights are computed by also taking into account the optical guide,
which is much cleaner than the SAR data and hence more discriminative. To
avoid injecting optical-domain information into the filtered image, a
SAR-domain statistical test is performed beforehand to reject any risky
predictor outright. Experiments on two SAR-optical datasets show that the
proposed method suppresses speckle very effectively, preserving structural
details without introducing visible filtering artifacts. Overall, the proposed
method compares favourably with all state-of-the-art despeckling filters, as
well as with our own previous optical-guided filter.
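A minimal sketch of the core idea: patch-wise nonlocal means where the similarity weights come from the cleaner co-registered guide image while the averaged values come from the noisy image only. Function and parameter names are illustrative, and the SAR-domain statistical test that rejects risky predictors is omitted.

```python
import numpy as np

def guided_nlm(noisy, guide, patch=3, search=7, h=0.1):
    """Nonlocal means on `noisy`, weights computed from `guide`.
    A toy sketch: O(H*W*search^2) Python loops, not an optimized filter."""
    pr, sr = patch // 2, search // 2
    H, W = noisy.shape
    pad = pr + sr
    noisy_p = np.pad(noisy, pad, mode='reflect')
    guide_p = np.pad(guide, pad, mode='reflect')
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = guide_p[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num = den = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = guide_p[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)  # guide-domain patch distance
                    w = np.exp(-d2 / (h * h))
                    num += w * noisy_p[ni, nj]       # values taken from noisy image
                    den += w
            out[i, j] = num / den
    return out
```

Because only the weights use the guide, optical structure steers the averaging but optical intensity values never leak into the output.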
Fast Deep Matting for Portrait Animation on Mobile Phone
Image matting plays an important role in image and video editing. However,
the formulation of image matting is inherently ill-posed. Traditional methods
usually rely on user interaction, such as trimaps and strokes, to deal with
the image matting problem, and cannot run on mobile phones in real time. In
this paper, we propose a real-time automatic deep matting approach for mobile
devices. By leveraging densely connected blocks and dilated convolution, a
lightweight fully convolutional network is designed to predict a coarse binary
mask for portrait images. A feathering block, which is edge-preserving and
matting-adaptive, is further developed to learn the guided filter and
transform the binary mask into an alpha matte. Finally, an automatic portrait
animation system based on fast deep matting is built on mobile devices; it
requires no interaction and achieves real-time matting at 15 fps. The
experiments show that the proposed approach achieves results comparable to
state-of-the-art matting solvers.

Comment: ACM Multimedia Conference (MM) 2017 camera-ready.
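The paper's feathering block is learned end to end; as a hedged reference point, the classic non-learned guided filter (He et al.) can likewise refine a coarse binary mask into a soft alpha matte. The sketch below assumes single-channel grayscale guidance and uses our own function names.

```python
import numpy as np

def box(x, r):
    """Mean filter over a (2r+1)x(2r+1) window via an integral image."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums are simple differences
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Refine coarse mask p using guidance image I (grayscale guided filter).
    Output is locally a linear function of I, so edges of I are preserved."""
    mI, mp = box(I, r), box(p, r)
    varI = box(I * I, r) - mI * mI
    covIp = box(I * p, r) - mI * mp
    a = covIp / (varI + eps)   # per-pixel linear coefficients
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

A typical use would be `alpha = guided_filter(gray_portrait, binary_mask)`, clipped to [0, 1]; the learned feathering block replaces the fixed (a, b) estimation with matting-adaptive, trainable coefficients.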