Minimax Structured Normal Means Inference
We provide a unified treatment of a broad class of noisy structure recovery
problems, known as structured normal means problems. In this setting, the goal
is to identify, from a finite collection of Gaussian distributions with
different means, the distribution that produced some observed data. Recent work
has studied several special cases including sparse vectors, biclusters, and
graph-based structures. We establish nearly matching upper and lower bounds on
the minimax probability of error for any structured normal means problem, and
we derive an optimality certificate for the maximum likelihood estimator, which
can be applied to many instantiations. We also consider an experimental design
setting, where we generalize our minimax bounds and derive an algorithm for
computing a design strategy with a certain optimality property. We show that
our results give tight minimax bounds for many structure recovery problems and
consider some consequences for interactive sampling.
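For the simplest version of this setting, the maximum likelihood estimator has a closed form: with equal isotropic covariances, it selects the candidate mean closest to the observation. The sketch below is a minimal Python illustration of that selection rule, not the paper's general machinery; all function and variable names are hypothetical.

```python
import numpy as np

def ml_structure(y, mus):
    """Maximum likelihood selection among Gaussian distributions with the
    given means and equal isotropic covariance: under that assumption the
    likelihood is maximized by the mean closest to y in Euclidean norm."""
    dists = [np.sum((y - mu) ** 2) for mu in mus]
    return int(np.argmin(dists))

# Toy instance: three candidate mean vectors (e.g. three sparse supports).
mus = [np.array([5.0, 0.0, 0.0]),
       np.array([0.0, 5.0, 0.0]),
       np.array([0.0, 0.0, 5.0])]
rng = np.random.default_rng(0)
y = mus[1] + rng.normal(scale=0.5, size=3)  # noisy observation from mean 1
print(ml_structure(y, mus))  # recovers index 1 at this noise level
```

The minimax error bounds in the paper quantify exactly when this nearest-mean rule succeeds: roughly, when the pairwise separations between the candidate means are large relative to the noise level.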
Classification with Asymmetric Label Noise: Consistency and Maximal Denoising
In many real-world classification problems, the labels of training examples
are randomly corrupted. Most previous theoretical work on classification with
label noise assumes that the two classes are separable, that the label noise is
independent of the true class label, or that the noise proportions for each
class are known. In this work, we give conditions that are necessary and
sufficient for the true class-conditional distributions to be identifiable.
These conditions are weaker than those analyzed previously, and allow for the
classes to be nonseparable and the noise levels to be asymmetric and unknown.
The conditions essentially state that a majority of the observed labels are
correct and that the true class-conditional distributions are "mutually
irreducible," a concept we introduce that limits the similarity of the two
distributions. For any label noise problem, there is a unique pair of true
class-conditional distributions satisfying the proposed conditions, and we
argue that this pair corresponds in a certain sense to maximal denoising of the
observed distributions.
Our results are facilitated by a connection to "mixture proportion
estimation," which is the problem of estimating the maximal proportion of one
distribution that is present in another. We establish a novel rate of
convergence result for mixture proportion estimation, and apply this to obtain
consistency of a discrimination rule based on surrogate loss minimization.
Experimental results on benchmark data and a nuclear particle classification
problem demonstrate the efficacy of our approach.
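The mixture proportion estimation subproblem has a simple closed form in the discrete case: if F = kappa*H + (1-kappa)*G for some distribution G, the maximal feasible kappa is the minimum of the ratio F(x)/H(x) over the support of H. The sketch below is a plain Python illustration of that identity for probability vectors, not the paper's estimator; names are hypothetical.

```python
import numpy as np

def max_mixture_proportion(f, h, eps=1e-12):
    """Maximal kappa such that f = kappa*h + (1-kappa)*g for some
    probability vector g, where f and h are discrete distributions.
    Feasibility requires f - kappa*h >= 0 pointwise, so the maximum
    is min over the support of h of f(x)/h(x)."""
    f, h = np.asarray(f, float), np.asarray(h, float)
    mask = h > eps
    return float(np.min(f[mask] / h[mask]))

f = np.array([0.5, 0.3, 0.2])
h = np.array([0.6, 0.2, 0.2])
print(max_mixture_proportion(f, h))  # 0.5/0.6 ~= 0.833
```

In the label noise setting, F plays the role of an observed (noisy) class-conditional distribution and H the other one; "mutual irreducibility" rules out the degenerate case where this proportion equals 1 in both directions.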
Locally adaptive image denoising by a statistical multiresolution criterion
We demonstrate how one can choose the smoothing parameter in image denoising
by a statistical multiresolution criterion, both globally and locally. Using
inhomogeneous diffusion and total variation regularization as examples for
localized regularization schemes, we present an efficient method for locally
adaptive image denoising. As expected, the smoothing parameter serves as an
edge detector in this framework. Numerical examples illustrate the usefulness
of our approach. We also present an application in confocal microscopy.
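One common form of a statistical multiresolution criterion tests normalized sums of the residual over a system of intervals: the smoothing parameter is increased as long as the residual still "looks like noise" at every scale. The 1-D sketch below illustrates that selection logic with dyadic intervals and a moving-average smoother; it is an illustrative simplification, not the paper's scheme, and all names are hypothetical.

```python
import numpy as np

def mr_statistic(res):
    """Multiresolution statistic: max over dyadic intervals of the
    normalized partial sum |sum(res[I])| / sqrt(|I|)."""
    n = len(res)
    stat, length = 0.0, 1
    while length <= n:
        for start in range(0, n - length + 1, length):
            s = res[start:start + length].sum()
            stat = max(stat, abs(s) / np.sqrt(length))
        length *= 2
    return stat

def choose_bandwidth(y, sigma, widths):
    """Pick the largest moving-average width whose residual still passes
    the multiresolution test (threshold sigma * sqrt(2 log n))."""
    n = len(y)
    thresh = sigma * np.sqrt(2 * np.log(n))
    best = widths[0]
    for w in widths:
        smooth = np.convolve(y, np.ones(w) / w, mode="same")
        if mr_statistic(y - smooth) <= thresh:
            best = w
    return best
```

A locally adaptive variant runs the same test on local windows, so that flat regions receive strong smoothing while edge regions, where the residual would fail the test, keep a small bandwidth.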
Learning sparse representations of depth
This paper introduces a new method for learning and inferring sparse
representations of depth (disparity) maps. The proposed algorithm relaxes the
usual assumption of the stationary noise model in sparse coding. This enables
learning from data corrupted with spatially varying noise or uncertainty,
typically obtained by laser range scanners or structured light depth cameras.
Sparse representations are learned from the Middlebury database disparity maps
and then exploited in a two-layer graphical model for inferring depth from
stereo, by including a sparsity prior on the learned features. Since they
capture higher-order dependencies in the depth structure, these priors can
complement smoothness priors commonly used in depth inference based on Markov
Random Field (MRF) models. Inference on the proposed graph is achieved using an
alternating iterative optimization technique, where the first layer is solved
using an existing MRF-based stereo matching algorithm, then held fixed as the
second layer is solved using the proposed non-stationary sparse coding
algorithm. This leads to a general method for improving solutions of
state-of-the-art MRF-based depth estimation algorithms. Our experimental
results first show that depth inference using learned representations leads to
state-of-the-art denoising of depth maps obtained from laser range scanners and
a time-of-flight camera. Furthermore, we show that adding sparse priors improves the
results of two depth estimation methods: the classical graph cut algorithm by
Boykov et al. and the more recent algorithm of Woodford et al.
Comment: 12 pages
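The non-stationary noise model amounts to a per-pixel weighted data term in the sparse coding objective: minimize 0.5*||sqrt(w)*(x - D a)||^2 + lam*||a||_1, where w encodes the spatially varying noise or uncertainty. The sketch below solves this with plain ISTA; it is a generic illustration of weighted sparse coding under that assumed objective, not the authors' algorithm, and all names are hypothetical.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_sparse_code(x, D, w, lam=0.1, n_iter=200):
    """ISTA for 0.5*||sqrt(w)*(x - D a)||^2 + lam*||a||_1, where w holds
    per-sample weights (e.g. inverse noise variances from a range sensor)."""
    DtWD = D.T @ (w[:, None] * D)
    L = np.linalg.norm(DtWD, 2)          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (w * (D @ a - x))   # gradient of the weighted data term
        a = soft(a - grad / L, lam / L)  # gradient step + l1 prox
    return a
```

Setting all weights equal recovers standard sparse coding; zero weights let the model ignore pixels with missing or unreliable depth measurements.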
Generalized SURE for Exponential Families: Applications to Regularization
Stein's unbiased risk estimate (SURE) was proposed by Stein for the
independent, identically distributed (iid) Gaussian model in order to derive
estimates that dominate least-squares (LS). In recent years, the SURE criterion
has been employed in a variety of denoising problems for choosing
regularization parameters that minimize an estimate of the mean-squared error
(MSE). However, its use has been limited to the iid case which precludes many
important applications. In this paper we begin by deriving a SURE counterpart
for general, not necessarily iid distributions from the exponential family.
This enables extending the SURE design technique to a much broader class of
problems. Based on this generalization we suggest a new method for choosing
regularization parameters in penalized LS estimators. We then demonstrate its
superior performance over the conventional generalized cross validation
approach and the discrepancy method in the context of image deblurring and
deconvolution. The SURE technique can also be used to design estimates without
predefining their structure. However, allowing for too many free parameters
impairs the performance of the resulting estimates. To address this inherent
tradeoff we propose a regularized SURE objective. Based on this design
criterion, we derive a wavelet denoising strategy that is similar in spirit to
the standard soft-threshold approach but can lead to improved MSE performance.
Comment: to appear in the IEEE Transactions on Signal Processing
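In the classical iid Gaussian case that this paper generalizes, SURE-based parameter selection for soft-thresholding has a well-known closed form: SURE(t) = n*sigma^2 - 2*sigma^2*#{|y_i| <= t} + sum_i min(y_i^2, t^2). The sketch below implements that baseline rule in Python; it illustrates the selection principle only, not the exponential-family generalization, and names are hypothetical.

```python
import numpy as np

def sure_soft(y, t, sigma):
    """SURE of soft-thresholding at level t under iid N(0, sigma^2) noise:
    n*sigma^2 - 2*sigma^2*#{|y_i| <= t} + sum_i min(y_i^2, t^2)."""
    n = len(y)
    return (n * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(y**2, t**2)))

def sure_threshold(y, sigma):
    """Pick the threshold minimizing SURE; since the risk estimate is
    piecewise quadratic between the |y_i|, searching those values suffices."""
    candidates = np.concatenate(([0.0], np.abs(y)))
    risks = [sure_soft(y, t, sigma) for t in candidates]
    return float(candidates[int(np.argmin(risks))])
```

Because SURE is an unbiased estimate of the MSE, minimizing it over the regularization parameter approximately minimizes the true (unobservable) risk, which is exactly the role it plays in the penalized LS setting of the paper.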
Poisson noise reduction with non-local PCA
Photon-limited imaging arises when the number of photons collected by a
sensor array is small relative to the number of detector elements. Photon
limitations are an important concern for many applications such as spectral
imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson
distribution is used to model these observations, and the inherent
heteroscedasticity of the data combined with standard noise removal methods
yields significant artifacts. This paper introduces a novel denoising algorithm
for photon-limited images which combines elements of dictionary learning and
sparse patch-based representations of images. The method employs both an
adaptation of Principal Component Analysis (PCA) for Poisson noise and recently
developed sparsity-regularized convex optimization algorithms for
photon-limited images. A comprehensive empirical evaluation of the proposed
method helps characterize the performance of this approach relative to other
state-of-the-art denoising methods. The results reveal that, despite its
conceptual simplicity, Poisson PCA-based denoising appears to be highly
competitive in very low light regimes.
Comment: erratum: the image "man" is wrongly named "pepper" in the journal version
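A simpler relative of the paper's Poisson-adapted PCA is to first apply the Anscombe variance-stabilizing transform, so the data become approximately Gaussian with unit variance, and then denoise patches by projecting onto their top principal components. The sketch below shows that substitute pipeline's two ingredients; it is not the paper's method, and all names are hypothetical.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: approximately stabilizes Poisson variance to 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(z):
    """Simple algebraic inverse of the Anscombe transform."""
    return (z / 2.0) ** 2 - 3.0 / 8.0

def pca_denoise_patches(patches, k):
    """Denoise a (num_patches, patch_dim) matrix by projecting the
    mean-centered patches onto their top-k principal components."""
    mean = patches.mean(axis=0)
    X = patches - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T @ Vt[:k] + mean
```

For very low photon counts the Gaussian approximation behind the Anscombe transform degrades, which is precisely the regime where modeling the Poisson likelihood directly, as the paper does, pays off.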
Speech signal enhancement by EMD and the Teager-Kaiser operator
In this paper, a speech denoising strategy based on time-adaptive thresholding of the intrinsic mode functions (IMFs) of the signal, extracted by empirical mode decomposition (EMD), is introduced. The denoised signal is reconstructed by superposing its adaptively thresholded IMFs. Adaptive thresholds are estimated using the Teager-Kaiser energy operator (TKEO) of the signal's IMFs. More precisely, TKEO identifies the type of frame by expanding the differences between speech and non-speech frames in each IMF. Being based on the EMD, the proposed speech denoising scheme is a fully data-driven approach. The method is tested on speech signals with different noise levels, and the results are compared to EMD-shrinkage and to the wavelet transform (WT) coupled with TKEO. Speech enhancement performance is evaluated using the output signal-to-noise ratio (SNR) and the perceptual evaluation of speech quality (PESQ) measure. On the analyzed speech signals, the proposed enhancement scheme outperforms the WT-TKEO and EMD-shrinkage approaches in terms of output SNR and PESQ, and noise is reduced more effectively by time-adaptive thresholding than by universal thresholding. The study is limited to signals corrupted by additive white Gaussian noise. The authors thank Professor Mohamed Bahoura from Université du Québec à Rimouski for fruitful discussions on time-adaptive thresholding.
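The two computational ingredients described above can be sketched compactly: the discrete TKEO is psi[n] = x[n]^2 - x[n-1]*x[n+1], and each IMF is soft-thresholded frame by frame with a threshold scaled by the frame's mean TKEO energy. The Python sketch below assumes the IMFs are already available (e.g. from an EMD library); the frame length, the scaling constant, and all names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1]*x[n+1] (endpoints left at zero)."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def denoise_imfs(imfs, frame=256, c=2.0):
    """Soft-threshold each IMF frame by frame, scaling the threshold with
    the frame's mean TKEO energy so that low-energy (noise-only) frames
    are suppressed more strongly, then superpose the thresholded IMFs."""
    out = []
    for imf in imfs:
        energy = tkeo(imf)
        d = imf.copy()
        for s in range(0, len(imf), frame):
            seg = slice(s, s + frame)
            t = c * np.sqrt(np.mean(np.abs(energy[seg])))
            d[seg] = np.sign(imf[seg]) * np.maximum(np.abs(imf[seg]) - t, 0.0)
        out.append(d)
    return np.sum(out, axis=0)
```

Because TKEO responds to both amplitude and instantaneous frequency, speech frames yield large energy and hence small effective attenuation, while noise-only frames are thresholded hard, which is the mechanism behind the time-adaptive behavior described above.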