Exploiting Image Local And Nonlocal Consistency For Mixed Gaussian-Impulse Noise Removal
Most existing image denoising algorithms can only deal with a single type of
noise, whereas in practice observed images are often corrupted by more than
one type of noise during acquisition and transmission. In this paper, we
propose a new variational
algorithm for mixed Gaussian-impulse noise removal by exploiting image local
consistency and nonlocal consistency simultaneously. Specifically, the local
consistency is measured by a hyper-Laplace prior, enforcing the local
smoothness of images, while the nonlocal consistency is measured by
three-dimensional sparsity of similar blocks, enforcing the nonlocal
self-similarity of natural images. Moreover, a Split-Bregman based technique is
developed to solve the above optimization problem efficiently. Extensive
experiments on mixed Gaussian plus impulse noise show that significant
performance improvements over current state-of-the-art schemes have been
achieved, substantiating the effectiveness of the proposed algorithm.
Comment: 6 pages, 4 figures, 3 tables, to be published at IEEE Int. Conf. on
Multimedia & Expo (ICME) 201
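The abstract does not state the exact functional. A generic objective consistent with its description — an ℓ1 data-fidelity term for robustness to impulse noise, a hyper-Laplacian local prior on gradients, and a nonlocal block-sparsity term — might look as follows, where u is the latent image, f the observation, and λ, μ are assumed regularization weights:

```latex
\min_{u} \; \|u - f\|_{1}
  \;+\; \lambda \sum_{i} \bigl|(\nabla u)_i\bigr|^{p}
  \;+\; \mu \, \Phi_{\mathrm{NL}}(u), \qquad 0 < p < 1,
```

where Φ_NL penalizes the 3-D transform-domain sparsity of groups of similar blocks. Split Bregman handles such objectives by introducing auxiliary variables so that each coupled term yields a simple subproblem.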
Detail-preserving and Content-aware Variational Multi-view Stereo Reconstruction
Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view
images is a fundamental yet active research area in computer vision. Despite
the steady progress in multi-view stereo reconstruction, most existing methods
are still limited in recovering fine-scale details and sharp features while
suppressing noise, and may fail in reconstructing regions with little texture.
To address these limitations, this paper presents a Detail-preserving and
Content-aware Variational (DCV) multi-view stereo method, which reconstructs
the 3D surface by alternating between reprojection error minimization and mesh
denoising. In reprojection error minimization, we propose a novel inter-image
similarity measure, which is effective in preserving fine-scale details of the
reconstructed surface and builds a connection between guided image filtering
and image registration. In mesh denoising, we propose a content-aware
ℓp-minimization algorithm that adaptively estimates the value of p and the
regularization parameters based on the current input. It is far more effective
at suppressing noise while preserving sharp features than conventional
isotropic mesh smoothing. Experimental results on benchmark datasets
demonstrate that our DCV method is capable of recovering more surface details,
and obtains cleaner and more accurate reconstructions than state-of-the-art
methods. In particular, our method achieves the best results among all
published methods on the Middlebury dino ring and dino sparse ring datasets in
terms of both completeness and accuracy.
Comment: 14 pages, 16 figures. Submitted to IEEE Transactions on Image
Processing
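As a loose illustration of the kind of ℓp-minimization (0 < p < 1) the abstract describes, here is an iteratively reweighted least squares (IRLS) sketch for a 1-D signal with a difference operator — not the paper's mesh setting; the choices of p, lam, and the IRLS scheme itself are assumptions for illustration:

```python
import numpy as np

def irls_lp_denoise(f, p=0.8, lam=1.0, n_iter=20, eps=1e-6):
    """Sketch of l_p-regularized denoising of a 1-D signal via IRLS:
    min_u ||u - f||^2 + lam * sum_i |(D u)_i|^p, with D the forward
    difference operator. Each iteration solves a weighted least-squares
    problem whose weights shrink the penalty near large (edge) gradients."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)  # (n-1, n) forward-difference matrix
    u = f.copy()
    for _ in range(n_iter):
        g = D @ u
        # IRLS weights: |g|^(p-2), smoothed by eps to avoid division by zero
        w = (np.abs(g) ** 2 + eps) ** (p / 2 - 1)
        # normal equations of the weighted quadratic surrogate
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)
    return u
```

Because the weights are small where gradients are large, flat regions are smoothed strongly while edges are left comparatively untouched — the edge-preserving behavior the abstract contrasts with isotropic smoothing.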
BM3D Frames and Variational Image Deblurring
A family of Block Matching 3-D (BM3D) algorithms for various imaging
problems has recently been proposed within the framework of nonlocal
patch-wise image modeling [1], [2]. In this paper we construct analysis and
synthesis frames formalizing the BM3D image modeling, and use these frames to develop
novel iterative deblurring algorithms. We consider two different formulations
of the deblurring problem: one given by minimization of the single objective
function and another based on the Nash equilibrium balance of two objective
functions. The latter results in an algorithm where the denoising and
deblurring operations are decoupled. The convergence of the developed
algorithms is proved. Simulation experiments show that the decoupled algorithm
derived from the Nash equilibrium formulation demonstrates the best numerical
and visual results and shows superiority over the state of the art in the
field, confirming the valuable potential of BM3D frames as an advanced image
modeling tool.
Comment: Submitted to IEEE Transactions on Image Processing on May 18, 2011.
An implementation of the proposed algorithm is available as part of the BM3D
package at http://www.cs.tut.fi/~foi/GCF-BM3
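The decoupling idea — alternate a data-fidelity (deblurring) step with a separate denoising step — can be sketched on a toy 1-D circular-convolution problem. The moving parts here are stand-ins: the `denoise` argument replaces the BM3D-frame shrinkage, and the kernel, step size, and iteration count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def decoupled_deblur(y, kernel, denoise, n_iter=100, step=1.0):
    """Toy decoupled iteration: a gradient (Landweber) step on the data
    term 0.5 * ||y - k (*) x||^2 for circular convolution (*), followed
    by an arbitrary denoising operator acting as the image prior."""
    K = np.fft.fft(kernel)
    x = y.copy()
    for _ in range(n_iter):
        # deblurring step: x += step * k^T (*) (y - k (*) x), via FFT
        residual = y - np.real(np.fft.ifft(K * np.fft.fft(x)))
        x = x + step * np.real(np.fft.ifft(np.conj(K) * np.fft.fft(residual)))
        # denoising step (placeholder for a BM3D-frame shrinkage operator)
        x = denoise(x)
    return x
```

With `denoise` set to the identity this reduces to a plain Landweber iteration; plugging in an actual denoiser gives the decoupled structure the abstract describes, where denoising and deblurring never appear in the same operator.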
A Novel Euler's Elastica based Segmentation Approach for Noisy Images via using the Progressive Hedging Algorithm
Euler's Elastica based unsupervised segmentation models have strong
capability of completing the missing boundaries for existing objects in a clean
image, but they do not work well on noisy images. This paper aims to
establish a Euler's Elastica based approach that properly deals with random
noise to improve segmentation performance on noisy images. We solve the
corresponding optimization problem using the progressive hedging algorithm
(PHA) with a step length suggested by the alternating direction method of
multipliers (ADMM). Technically, all the simplified convex versions of the
subproblems derived from the major framework of PHA can be obtained by using
the curvature weighted approach and the convex relaxation method. Then an
alternating optimization strategy is applied with the merits of using some
powerful accelerating techniques including the fast Fourier transform (FFT) and
generalized soft threshold formulas. Extensive experiments have been conducted
on both synthetic and real images, validating the significant gains of the
proposed segmentation models and demonstrating the advantages of the developed
algorithm.
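In the simplest ℓ1 case, the soft threshold the abstract refers to is the familiar shrinkage operator, the closed-form proximal map of τ|x| (the generalized formulas in the paper add weights and other penalty shapes). A minimal numpy version:

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-shrinkage: prox of tau*|x|,
    i.e. sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

Subproblems of this shape are what make splitting schemes like PHA or ADMM cheap per iteration: the coupled terms are isolated so that each update is either an FFT-diagonalizable linear solve or an elementwise shrinkage.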
Universal Denoising Networks: A Novel CNN Architecture for Image Denoising
We design a novel network architecture for learning discriminative image
models that are employed to efficiently tackle the problem of grayscale and
color image denoising. Based on the proposed architecture, we introduce two
different variants. The first network involves convolutional layers as a core
component, while the second one relies instead on non-local filtering layers
and thus it is able to exploit the inherent non-local self-similarity property
of natural images. As opposed to most of the existing deep network approaches,
which require the training of a specific model for each considered noise level,
the proposed models are able to handle a wide range of noise levels using a
single set of learned parameters, while remaining robust when the noise
degrading the latent image does not match the statistics of the noise used
during training. We support this claim with results on publicly available
images corrupted by unknown noise, compared against solutions obtained by
competing methods. At the same time, the
introduced networks achieve excellent results under additive white Gaussian
noise (AWGN), comparable to those of the current state-of-the-art network,
while relying on a shallower architecture with an order of magnitude fewer
trained parameters. These properties make the proposed networks ideal
candidates to serve as sub-solvers in restoration methods that deal with
general inverse imaging problems such as deblurring, demosaicking,
super-resolution, etc.
Comment: Camera-ready paper to appear in the Proceedings of CVPR 201
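The classical operation behind "non-local filtering layers" is non-local means: each sample is replaced by a similarity-weighted average of samples whose surrounding patches look alike. The toy 1-D sketch below illustrates that operation only; the network in the abstract learns the corresponding parameters rather than fixing `patch` and `h` as done here:

```python
import numpy as np

def nonlocal_means_1d(x, patch=3, h=0.5):
    """Toy non-local filtering of a 1-D signal: weight every pair of
    positions by the similarity of their length-`patch` neighborhoods,
    then average. Exploits self-similarity rather than local smoothness."""
    n = len(x)
    pad = patch // 2
    xp = np.pad(x, pad, mode='edge')
    patches = np.stack([xp[i:i + patch] for i in range(n)])  # (n, patch)
    # pairwise squared patch distances -> Gaussian similarity weights
    d = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d / (h ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x
```

Because averaging happens only across genuinely similar neighborhoods, repeated structures are denoised aggressively while dissimilar regions (e.g. across an edge) are left largely alone.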
QuaSI: Quantile Sparse Image Prior for Spatio-Temporal Denoising of Retinal OCT Data
Optical coherence tomography (OCT) enables high-resolution and non-invasive
3D imaging of the human retina but is inherently impaired by speckle noise.
This paper introduces a spatio-temporal denoising algorithm for OCT data on a
B-scan level using a novel quantile sparse image (QuaSI) prior. To remove
speckle noise while preserving image structures of diagnostic relevance, we
implement our QuaSI prior via median filter regularization coupled with a Huber
data fidelity model in a variational approach. For efficient energy
minimization, we develop an alternating direction method of multipliers (ADMM)
scheme using a linearization of median filtering. Our spatio-temporal method
can handle both denoising of single B-scans and of temporally consecutive
B-scans, to obtain volumetric OCT data with enhanced signal-to-noise ratio.
Using only 4 B-scans, our algorithm achieved performance comparable to
averaging 13 B-scans and outperformed other current denoising methods.
Comment: submitted to MICCAI'1
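The core idea of a median-filter prior — the clean image should be nearly invariant under median filtering — can be illustrated with a crude fixed-point iteration on a 1-D signal. This is not the paper's linearized ADMM or its Huber fidelity; the function name, `lam`, and the simple averaging update are assumptions made for illustration:

```python
import numpy as np

def median_prior_denoise(f, n_iter=10, lam=2.0, width=3):
    """Illustrative iteration for a median-filter prior: repeatedly pull
    the estimate toward its own sliding-window median (penalizing
    u - med(u), the QuaSI-style residual) and toward the noisy data f."""
    def med(x):
        # 1-D sliding-window median with edge replication
        pad = width // 2
        xp = np.pad(x, pad, mode='edge')
        win = np.stack([xp[i:i + len(x)] for i in range(width)])
        return np.median(win, axis=0)

    u = f.copy()
    for _ in range(n_iter):
        # weighted average of data term and median-prior term
        u = (f + lam * med(u)) / (1.0 + lam)
    return u
```

Medians ignore outliers entirely, which is why such a prior suppresses speckle-like spikes while leaving genuine structures, which survive median filtering, essentially untouched.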