Hyperanalytic denoising
A new threshold rule for the estimation of a deterministic image immersed in noise is proposed. The full estimation procedure is based on a separable wavelet decomposition of the observed image, and the estimation is improved by introducing the new threshold for the decomposition coefficients. The observed wavelet coefficients are thresholded using the magnitudes of wavelet transforms of a small number of "replicates" of the image. The "replicates" are calculated by extending the image into a vector-valued hyperanalytic signal. More than one hyperanalytic signal may be chosen, and either the hypercomplex or the Riesz transform is used to calculate this object. The deterministic and stochastic properties of the observed wavelet coefficients of the hyperanalytic signal, at a fixed scale and position index, are determined. A "universal" threshold is calculated for the proposed procedure, and an expression for the risk of an individual coefficient is derived. The risk is calculated explicitly when the "universal" threshold is used and is shown to be less than the risk of "universal" hard thresholding under certain conditions. The proposed method is implemented and the derived theoretical risk reductions are substantiated.
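The keep-or-kill decision described above can be sketched in a few lines: the observed coefficient is hard-thresholded, but the decision uses the combined magnitude of the coefficient together with its "replicates". This is an illustrative numpy toy on synthetic coefficients, not the paper's full wavelet pipeline; `replicate_hard_threshold` and the sample values are assumptions for illustration.

```python
import numpy as np

def replicate_hard_threshold(coeffs, replicates, threshold):
    """Hard-threshold wavelet coefficients, deciding keep-or-kill from the
    combined magnitude of each coefficient and its 'replicates' (e.g. the
    coefficients of the hypercomplex/Riesz components of the signal).
    coeffs: shape (n,); replicates: shape (k, n)."""
    # Combined magnitude over the observed coefficient and its replicates.
    mag = np.sqrt(coeffs**2 + np.sum(np.asarray(replicates)**2, axis=0))
    # Keep the observed coefficient only where the joint magnitude exceeds
    # the threshold (hard thresholding driven by the joint magnitude).
    return np.where(mag > threshold, coeffs, 0.0)

# Toy usage with hand-picked synthetic coefficients (stand-in replicates).
noisy = np.array([5.1, 0.4, -3.9, 0.2])
reps = np.array([[0.3, 0.1, -0.5, 0.2],
                 [0.2, -0.4, 0.1, 0.3]])
denoised = replicate_hard_threshold(noisy, reps, threshold=3.0)
```

The point of the joint magnitude is that a true signal coefficient tends to be large in all replicates simultaneously, making the keep-or-kill decision more stable than thresholding the single observed coefficient.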
Extreme Value Analysis of Empirical Frame Coefficients and Implications for Denoising by Soft-Thresholding
Denoising by frame thresholding is one of the most basic and efficient
methods for recovering a discrete signal or image from data that are corrupted
by additive Gaussian white noise. The basic idea is to select a frame of
analyzing elements that separates the data into a few large coefficients due to the
signal and many small coefficients mainly due to the noise \epsilon_n. Removing
all data coefficients being in magnitude below a certain threshold yields a
reconstruction of the original signal. In order to properly balance the amount
of noise to be removed and the relevant signal features to be kept, a precise
understanding of the statistical properties of thresholding is important. For
that purpose we derive the asymptotic distribution of max_{\omega \in \Omega_n}
|\langle \phi_\omega^n, \epsilon_n \rangle| for a wide class of redundant frames
\{\phi_\omega^n : \omega \in \Omega_n\}. Based on our theoretical results we give
a rationale for universal extreme value thresholding techniques yielding
asymptotically sharp confidence regions and smoothness estimates corresponding
to prescribed significance levels. The results cover many frames used in
imaging and signal recovery applications, such as redundant wavelet systems,
curvelet frames, or unions of bases. We show that `generically' a standard
Gumbel law results, as is known from the case of orthonormal wavelet bases.
However, for specific highly redundant frames other limiting laws may occur. We
indeed verify that the translation invariant wavelet transform shows a
different asymptotic behaviour.
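The extreme-value rationale above can be illustrated numerically: for n i.i.d. Gaussian coefficients (the orthonormal-basis case), the maximum absolute coefficient concentrates near sigma*sqrt(2 log n), which motivates the "universal" threshold. The sketch below, under that simplifying assumption, applies soft-thresholding at this level to pure-noise coefficients.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: shrink magnitudes by t, set values below t to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# For n i.i.d. N(0, sigma^2) coefficients, max|coeff| concentrates near
# sigma * sqrt(2 log n); thresholding at this level removes (with high
# probability) nearly all pure-noise coefficients.
rng = np.random.default_rng(1)
n, sigma = 4096, 1.0
noise_coeffs = rng.normal(scale=sigma, size=n)
universal_t = sigma * np.sqrt(2 * np.log(n))
cleaned = soft_threshold(noise_coeffs, universal_t)
```

For highly redundant frames the abstract's point is precisely that this Gaussian-maximum calibration can fail: the limiting law of the maximum need not be the standard Gumbel one, so the threshold level must be adapted to the frame.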
Deep Mean-Shift Priors for Image Restoration
In this paper we introduce a natural image prior that directly represents a
Gaussian-smoothed version of the natural image distribution. We include our
prior in a formulation of image restoration as a Bayes estimator that also
allows us to solve noise-blind image restoration problems. We show that the
gradient of our prior corresponds to the mean-shift vector on the natural image
distribution. In addition, we learn the mean-shift vector field using denoising
autoencoders, and use it in a gradient descent approach to perform Bayes risk
minimization. We demonstrate competitive results for noise-blind deblurring,
super-resolution, and demosaicing.
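The gradient-descent scheme described above can be sketched as follows. The mean-shift vector D(x) - x, with D a denoiser, plays the role of the (scaled) gradient of the smoothed image prior; here a simple moving-average filter stands in for the paper's learned denoising autoencoder, and all parameter values are illustrative assumptions.

```python
import numpy as np

def box_denoiser(x, width=5):
    """Stand-in for a learned denoising autoencoder: a moving-average
    filter (the paper trains a DAE; this is only illustrative)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def mean_shift_restoration(y, steps=50, lr=0.1, lam=1.0, sigma=1.0):
    """Gradient-descent restoration: the mean-shift vector D(x) - x serves
    as a scaled gradient of the Gaussian-smoothed image prior."""
    x = y.copy()
    for _ in range(steps):
        data_grad = x - y                              # quadratic data-fit term
        prior_grad = (box_denoiser(x) - x) / sigma**2  # mean-shift vector
        x = x - lr * (data_grad - lam * prior_grad)
    return x

# Toy usage: restore a noisy 1-D sine (a stand-in for an image row).
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 200))
noisy = clean + rng.normal(scale=0.3, size=200)
restored = mean_shift_restoration(noisy)
```

Swapping the data-fit gradient for the gradient of a blur or downsampling model gives, in the same spirit, the deblurring and super-resolution variants the abstract mentions.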
Pointwise adaptive estimation for robust and quantile regression
A nonparametric procedure for robust regression estimation and for quantile
regression is proposed which is completely data-driven and adapts locally to
the regularity of the regression function. This is achieved by considering in
each point M-estimators over different local neighbourhoods and by a local
model selection procedure based on sequential testing. Non-asymptotic risk
bounds are obtained, which yield rate-optimality for large sample asymptotics
under weak conditions. Simulations for different univariate median regression
models show good finite sample properties, also in comparison to traditional
methods. The approach is extended to image denoising and applied to CT scans in
cancer research.
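The local model-selection idea can be sketched with a Lepski-type rule for median regression: enlarge the neighbourhood around a point as long as the new local median remains statistically consistent with the previously accepted estimates, and stop at the first significant disagreement. This is a simplified illustration, not the paper's exact sequential test; the tolerance rule and scale estimate are assumptions.

```python
import numpy as np

def adaptive_local_median(x, y, x0, bandwidths, z=1.0):
    """Pointwise adaptive median regression at x0 (Lepski-type sketch):
    accept ever-larger neighbourhoods while the local medians agree."""
    est = None
    accepted = []  # (median, scale) pairs for accepted bandwidths
    for h in sorted(bandwidths):
        mask = np.abs(x - x0) <= h
        if mask.sum() < 3:
            continue
        med = np.median(y[mask])
        # Crude scale for the local median: ~1.2533 * sd / sqrt(n_local)
        # (the asymptotic factor for the median under Gaussian noise).
        scale = 1.2533 * np.std(y[mask]) / np.sqrt(mask.sum())
        if any(abs(med - m) > z * (scale + s) for m, s in accepted):
            break  # larger window disagrees: keep the previous estimate
        est = med
        accepted.append((med, scale))
    return est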
Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
The ubiquity of integrating detectors in imaging and other applications
implies that a variety of real-world data are well modeled as Poisson random
variables whose means are in turn proportional to an underlying vector-valued
signal of interest. In this article, we first show how the so-called Skellam
distribution arises from the fact that Haar wavelet and filterbank transform
coefficients corresponding to measurements of this type are distributed as sums
and differences of Poisson counts. We then provide two main theorems on Skellam
shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting
and the other providing for unbiased risk estimation in a frequentist context.
These results serve to yield new estimators in the Haar transform domain,
including an unbiased risk estimate for shrinkage of Haar-Fisz
variance-stabilized data, along with accompanying low-complexity algorithms for
inference. We conclude with a simulation study demonstrating the efficacy of
our Skellam shrinkage estimators both for the standard univariate wavelet test
functions as well as a variety of test images taken from the image processing
literature, confirming that they offer substantial performance improvements
over existing alternatives.
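The central observation above is concrete enough to sketch: one level of the unnormalized Haar transform turns adjacent Poisson counts into sums (again Poisson) and differences (Skellam). The toy below shrinks the Skellam details with a simple sum-calibrated soft threshold as a stand-in for the paper's Bayesian and unbiased-risk shrinkage rules; the function and its threshold rule are illustrative assumptions.

```python
import numpy as np

def haar_poisson_shrink(counts, t):
    """One level of the unnormalized Haar transform on Poisson counts:
    scaling coefficients are SUMS and detail coefficients are DIFFERENCES
    of Poisson counts, hence Skellam-distributed.  Details are then
    soft-thresholded (a simple stand-in for Skellam-specific shrinkage)."""
    a, b = counts[0::2], counts[1::2]
    sums, diffs = a + b, (a - b).astype(float)  # diffs ~ Skellam
    # The variance of a Skellam difference is lam1 + lam2, which the sum
    # coefficient estimates; scale the threshold accordingly.
    shrunk = np.sign(diffs) * np.maximum(
        np.abs(diffs) - t * np.sqrt(np.maximum(sums, 1)), 0.0)
    # Invert the unnormalized Haar step.
    rec = np.empty_like(counts, dtype=float)
    rec[0::2] = (sums + shrunk) / 2.0
    rec[1::2] = (sums - shrunk) / 2.0
    return rec
```

With t = 0 the transform is inverted exactly; with a large t every pair collapses to its local mean, i.e. full smoothing, so t trades off noise removal against detail preservation just as in the Gaussian wavelet setting.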