Fast and Accurate Bilateral Filtering using Gauss-Polynomial Decomposition
The bilateral filter is a versatile non-linear filter that has found diverse
applications in image processing, computer vision, computer graphics, and
computational photography. A widely-used form of the filter is the Gaussian
bilateral filter in which both the spatial and range kernels are Gaussian. A
direct implementation of this filter requires O(σ_s^2) operations per
pixel, where σ_s is the standard deviation of the spatial Gaussian. In
this paper, we propose an accurate approximation algorithm that can cut down
the computational complexity to O(1) per pixel for any arbitrary σ_s
(constant-time implementation). This is based on the observation that the range
kernel operates via the translations of a fixed Gaussian over the range space,
and that these translated Gaussians can be accurately approximated using the
so-called Gauss-polynomials. The overall algorithm emerging from this
approximation involves a series of spatial Gaussian filtering, which can be
implemented in constant-time using separability and recursion. We present some
preliminary results to demonstrate that the proposed algorithm compares
favorably with some of the existing fast algorithms in terms of speed and
accuracy.
Comment: To appear in the IEEE International Conference on Image Processing (ICIP 2015).
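A rough illustration of the shiftable-kernel idea described in the abstract is sketched below, assuming scipy is available: the translated range Gaussian is expanded into a truncated Gauss-polynomial series, which reduces the bilateral filter to a fixed number of spatial Gaussian filterings. The centering around the midpoint of the dynamic range, the expansion degree, and the use of scipy's gaussian_filter (in place of the constant-time recursive Gaussian used in the paper) are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauss_poly_bilateral(img, sigma_s, sigma_r, degree=20):
    """Sketch of a constant-time bilateral filter based on a Gauss-polynomial
    expansion of the translated range Gaussian (illustrative, not the exact
    algorithm from the paper)."""
    f = img.astype(np.float64)
    # Center intensities around the midpoint of the dynamic range so that the
    # series expansion of exp(f(x) f(y) / sigma_r^2) stays well conditioned.
    mid = 0.5 * (f.max() + f.min())
    f = f - mid

    h = np.exp(-f**2 / (2 * sigma_r**2))   # fixed range Gaussian evaluated at f(y)
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    coef = 1.0                              # Taylor coefficient 1 / (sigma_r^(2n) n!)
    fn = np.ones_like(f)                    # f(x)^n, starting at n = 0
    for n in range(degree + 1):
        # Each term needs only spatial Gaussian filtering of h(y) f(y)^n and
        # h(y) f(y)^(n+1); scipy's gaussian_filter stands in for the paper's
        # O(1) recursive Gaussian.
        gn  = gaussian_filter(h * fn,     sigma_s)
        gn1 = gaussian_filter(h * fn * f, sigma_s)
        den += coef * fn * gn
        num += coef * fn * gn1
        coef /= (sigma_r**2) * (n + 1)
        fn = fn * f
    return num / (den + 1e-12) + mid
```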
Artifact reduction for separable non-local means
It was recently demonstrated [J. Electron. Imaging, 25(2), 2016] that one can
perform fast non-local means (NLM) denoising of one-dimensional signals using a
method called lifting. The cost of lifting is independent of the patch length,
which dramatically reduces the run-time for large patches. Unfortunately, it is
difficult to directly extend lifting for non-local means denoising of images.
To bypass this, the authors proposed a separable approximation in which the
image rows and columns are filtered using lifting. The overall algorithm is
significantly faster than NLM, and the results are comparable in terms of PSNR.
However, the separable processing often produces vertical and horizontal
stripes in the image. This problem was previously addressed by using a
bilateral filter-based post-smoothing, which was effective in removing some of
the stripes. In this letter, we demonstrate that stripes can be mitigated in
the first place simply by involving the neighboring rows (or columns) in the
filtering. In other words, we use a two-dimensional search (similar to NLM),
while still using one-dimensional patches (as in the previous proposal). The
novelty is in the observation that one can use lifting for performing
two-dimensional searches. The proposed approach produces artifact-free images,
whose quality and PSNR are comparable to NLM, while being significantly faster.
Comment: To appear in Journal of Electronic Imaging.
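The sketch below is only meant to make the search/patch geometry concrete: it runs the two-dimensional search with one-dimensional row patches by brute force, without the lifting step that makes the letter's algorithm fast. The function and parameter names are assumptions.

```python
import numpy as np

def nlm_2d_search_1d_patches(img, search=5, patch=3, h=10.0):
    """Brute-force illustration of a 2D search window combined with 1D (row)
    patches; not the fast lifting-based implementation."""
    f = img.astype(np.float64)
    pad = np.pad(f, search + patch, mode='reflect')
    out = np.zeros_like(f)
    H, W = f.shape
    offs = np.arange(-patch, patch + 1)
    for i in range(H):
        for j in range(W):
            ci, cj = i + search + patch, j + search + patch
            ref = pad[ci, cj + offs]                    # 1D row patch at the pixel
            num = den = 0.0
            for di in range(-search, search + 1):       # 2D search window
                for dj in range(-search, search + 1):
                    cand = pad[ci + di, cj + dj + offs]  # 1D row patch at candidate
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```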
Fast Separable Non-Local Means
We propose a simple and fast algorithm called PatchLift for computing
distances between patches (contiguous blocks of samples) extracted from a given
one-dimensional signal. PatchLift is based on the observation that the patch
distances can be efficiently computed from a matrix that is derived from the
one-dimensional signal using lifting; importantly, the number of operations
required to compute the patch distances using this approach does not scale with
the patch length. We next demonstrate how PatchLift can be used for patch-based
denoising of images corrupted with Gaussian noise. In particular, we propose a
separable formulation of the classical Non-Local Means (NLM) algorithm that can
be implemented using PatchLift. We demonstrate that the PatchLift-based
implementation of separable NLM is a few orders of magnitude faster than standard NLM, and is
competitive with existing fast implementations of NLM. Moreover, its denoising
performance is shown to be consistently superior to that of NLM and some of its
variants, both in terms of PSNR/SSIM and visual quality.
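The following sketch illustrates one way the lifting idea can play out for one-dimensional patch distances (my reading of the abstract, not necessarily the exact PatchLift construction): after cumulative sums along the diagonals of the pairwise squared-difference matrix, each patch distance costs two lookups, independent of the patch length. The function names are hypothetical.

```python
import numpy as np

def lifted_cumsum(f):
    """Precompute cumulative sums along the diagonals of the pairwise
    squared-difference matrix of a 1D signal (sketch of the lifting idea)."""
    f = np.asarray(f, dtype=np.float64)
    n = len(f)
    D = (f[:, None] - f[None, :]) ** 2      # D[i, j] = (f[i] - f[j])^2
    S = np.zeros((n + 1, n + 1))            # S[i+1, j+1] = D[i, j] + S[i, j]
    for i in range(n):
        S[i + 1, 1:] = D[i] + S[i, :-1]     # running sums along each diagonal
    return S

def patch_dist(S, i, j, K):
    """Squared distance between the length-(2K+1) patches centred at i and j:
    two lookups, so the cost does not scale with the patch length."""
    return S[i + K + 1, j + K + 1] - S[i - K, j - K]

# Example: the lifted distance agrees with the direct computation.
f = np.array([1.0, 2.0, 4.0, 7.0, 11.0, 16.0])
S = lifted_cumsum(f)
K = 1
direct = sum((f[2 + k] - f[4 + k]) ** 2 for k in range(-K, K + 1))
assert np.isclose(patch_dist(S, 2, 4, K), direct)
```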
Improvements on "Fast space-variant elliptical filtering using box splines"
It is well-known that box filters can be efficiently computed using
pre-integrations and local finite-differences
[Crow1984,Heckbert1986,Viola2001]. By generalizing this idea and by combining
it with a non-standard variant of the Central Limit Theorem, a constant-time or
O(1) algorithm was proposed in [Chaudhury2010] that allowed one to perform
space-variant filtering using Gaussian-like kernels. The algorithm was based on
the observation that both isotropic and anisotropic Gaussians could be
approximated using certain bivariate splines called box splines. The attractive
feature of the algorithm was that it allowed one to continuously control the
shape and size (covariance) of the filter, and that it had a fixed
computational cost per pixel, irrespective of the size of the filter. The
algorithm, however, offered a limited control on the covariance and accuracy of
the Gaussian approximation. In this work, we propose some improvements by
appropriately modifying the algorithm in [Chaudhury2010].
Comment: 7 figures.
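For context, the pre-integration and finite-difference primitive cited above [Crow1984, Heckbert1986, Viola2001] can be sketched as follows. This is only the O(1)-per-pixel box-filter building block, not the space-variant box-spline algorithm itself, and the window radii are illustrative parameters.

```python
import numpy as np

def box_filter_sat(img, rx, ry):
    """O(1)-per-pixel box filtering via pre-integration (a summed-area table)
    followed by local finite differences."""
    f = img.astype(np.float64)
    H, W = f.shape
    # Pre-integration: S[i, j] = sum of f over the rectangle [0, i) x [0, j).
    S = np.zeros((H + 1, W + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(f, axis=0), axis=1)
    out = np.empty_like(f)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(i - ry, 0), min(i + ry + 1, H)
            j0, j1 = max(j - rx, 0), min(j + rx + 1, W)
            # Finite difference of the integral: four lookups per pixel,
            # regardless of the window size.
            s = S[i1, j1] - S[i0, j1] - S[i1, j0] + S[i0, j0]
            out[i, j] = s / ((i1 - i0) * (j1 - j0))
    return out
```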
Image Denoising using Optimally Weighted Bilateral Filters: A Sure and Fast Approach
The bilateral filter is known to be quite effective in denoising images
corrupted with small dosages of additive Gaussian noise. The denoising
performance of the filter, however, is known to degrade quickly with the
increase in noise level. Several adaptations of the filter have been proposed
in the literature to address this shortcoming, but often at a substantial
computational overhead. In this paper, we report a simple pre-processing step
that can substantially improve the denoising performance of the bilateral
filter, at almost no additional cost. The modified filter is designed to be
robust at large noise levels, but often tends to perform poorly below a certain
noise threshold. To get the best of the original and the modified filter, we
propose to combine them in a weighted fashion, where the weights are chosen to
minimize (a surrogate of) the oracle mean-squared-error (MSE). The
optimally-weighted filter is thus guaranteed to perform better than either of
the component filters in terms of the MSE, at all noise levels. We also provide
a fast algorithm for the weighted filtering. Visual and quantitative denoising
results on standard test images are reported which demonstrate that the
improvement over the original filter is significant both visually and in terms
of PSNR. Moreover, the denoising performance of the optimally-weighted
bilateral filter is competitive with the computation-intensive non-local means
filter.
Comment: To appear in the IEEE International Conference on Image Processing (ICIP 2015). Link to the Matlab code added in the revision.
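The oracle weighting that the surrogate approximates has a simple closed form: for a convex combination of two estimates, the MSE is quadratic in the weight. The sketch below illustrates this oracle construction only; it is not the paper's SURE-based estimator (which does not require the clean image), and the names f1, f2, clean are illustrative.

```python
import numpy as np

def oracle_weighted_combination(f1, f2, clean):
    """Closed-form oracle weight for the convex combination lam*f1 + (1-lam)*f2
    that minimizes the MSE against the clean image. In practice the clean image
    is unknown and a SURE-type surrogate of the MSE is minimized instead."""
    d = (f1 - f2).ravel()
    lam = np.dot(d, (clean - f2).ravel()) / (np.dot(d, d) + 1e-12)
    lam = np.clip(lam, 0.0, 1.0)   # keep a convex combination of the two filters
    return lam * f1 + (1.0 - lam) * f2, lam
```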
A new ADMM algorithm for the Euclidean median and its application to robust patch regression
The Euclidean Median (EM) of a set of points in a Euclidean space
is the point x minimizing the (weighted) sum of the Euclidean distances of x to
the points in the set. While there exists no closed-form expression for the EM,
it can nevertheless be computed using iterative methods such as the Weiszfeld
algorithm. The EM has classically been used as a robust estimator of centrality
for multivariate data. It was recently demonstrated that the EM can be used to
perform robust patch-based denoising of images by generalizing the popular
Non-Local Means algorithm. In this paper, we propose a novel algorithm for
computing the EM (and its box-constrained counterpart) using variable splitting
and the method of augmented Lagrangian. The attractive feature of this approach
is that the subproblems involved in the ADMM-based optimization of the
augmented Lagrangian can be resolved using simple closed-form projections. The
proposed ADMM solver is used for robust patch-based image denoising and is
shown to exhibit faster convergence compared to an existing solver.
Comment: 5 pages, 3 figures, 1 table. To appear in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 19-24, 2015.
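One standard way to set up such a variable splitting is sketched below, assuming the splitting z_i = x - a_i: the z-update is a closed-form vector shrinkage and the x-update an average. This is the unconstrained version; the paper's exact ADMM formulation and its box-constrained variant may differ in details.

```python
import numpy as np

def euclidean_median_admm(points, weights=None, rho=1.0, iters=200):
    """ADMM sketch for the (weighted) Euclidean median min_x sum_i w_i ||x - a_i||
    using the splitting z_i = x - a_i."""
    A = np.asarray(points, dtype=np.float64)          # shape (n, d)
    n, d = A.shape
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=np.float64)
    x = A.mean(axis=0)                                # initialize at the centroid
    Z = np.zeros((n, d))
    U = np.zeros((n, d))                              # scaled dual variables
    for _ in range(iters):
        # z-update: prox of (w_i/rho)*||.||_2, i.e. vector (block) soft-thresholding.
        V = x - A + U
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        scale = np.maximum(1.0 - (w[:, None] / rho) / np.maximum(norms, 1e-12), 0.0)
        Z = scale * V
        # x-update: least-squares step, an average of a_i + z_i - u_i.
        x = np.mean(A + Z - U, axis=0)
        # Dual update on the residual x - a_i - z_i.
        U = U + (x - A - Z)
    return x
```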