A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging
Recently, impressive denoising results have been achieved by Bayesian
approaches which assume Gaussian models for the image patches. This improvement
in performance can be attributed to the use of per-patch models. Unfortunately,
such an approach is particularly unstable for most inverse problems beyond
denoising. In this work, we propose the use of a hyperprior to model image
patches in order to stabilize the estimation procedure. The proposed
restoration scheme has two main advantages: first, it is adapted to diagonal
degradation matrices, and in particular to missing-data problems (e.g.
inpainting of missing pixels or zooming); second, it can deal with
signal-dependent noise models, which are particularly well suited to digital
cameras. As such, the
scheme is especially adapted to computational photography. In order to
illustrate this point, we provide an application to high dynamic range imaging
from a single image taken with a modified sensor, which shows the effectiveness
of the proposed scheme.
Comment: Some figures are reduced to comply with arXiv's size constraints.
Full-size images are available as HAL technical report hal-01107519v5. IEEE
Transactions on Computational Imaging, 201
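To make the restoration step concrete: with a per-patch Gaussian model x ~ N(mu, Sigma), a diagonal degradation matrix D (e.g. a 0/1 mask for missing pixels) and additive Gaussian noise, the posterior mean has a closed form. The sketch below shows only that underlying Wiener-type step with a fixed (mu, Sigma); the paper's actual contribution, a hyperprior that stabilizes the per-patch estimation of (mu, Sigma), is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def wiener_restore_patch(y, mask, mu, Sigma, sigma2):
    """MMSE (posterior-mean) estimate of a patch x ~ N(mu, Sigma)
    observed as y = D x + n, with D = diag(mask) (missing pixels -> 0)
    and n ~ N(0, sigma2 * I). Illustrative sketch, not the paper's code."""
    D = np.diag(mask.astype(float))
    # Posterior mean: mu + Sigma D^T (D Sigma D^T + sigma2 I)^{-1} (y - D mu)
    S = D @ Sigma @ D.T + sigma2 * np.eye(len(y))
    return mu + Sigma @ D.T @ np.linalg.solve(S, y - D @ mu)
```

Masked entries fall back to the prior mean, while observed entries are lightly shrunk toward it, which is why the quality of (mu, Sigma) is the crux of the method.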
An MDL framework for sparse coding and dictionary learning
The power of sparse signal modeling with learned over-complete dictionaries
has been demonstrated in a variety of applications and fields, from signal
processing to statistical inference and machine learning. However, the
statistical properties of these models, such as under-fitting or over-fitting
given sets of data, are still not well characterized in the literature. As a
result, the success of sparse modeling depends on hand-tuning critical
parameters for each dataset and application. This work aims to address this issue by
providing a practical and objective characterization of sparse models by means
of the Minimum Description Length (MDL) principle -- a well established
information-theoretic approach to model selection in statistical inference. The
resulting framework derives a family of efficient sparse coding and dictionary
learning algorithms which, by virtue of the MDL principle, are completely
parameter free. Furthermore, the framework makes it possible to incorporate
additional prior information into existing models, such as Markovian dependencies, or to
define completely new problem formulations, including in the matrix analysis
area, in a natural way. These virtues will be demonstrated with parameter-free
algorithms for the classic image denoising and classification problems, and for
low-rank matrix recovery in video applications.
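To illustrate the flavor of two-part MDL model selection (this is a toy sketch, not the paper's algorithm), the code below picks the sparsity level of a code over an orthonormal dictionary by minimizing an assumed codelength: bits for the Gaussian-coded residual plus bits per retained coefficient (index plus a quantized value, here a hypothetical 16 bits). No hand-tuned threshold appears; the codelength itself selects k.

```python
import numpy as np

def mdl_sparse_code(y, D, bits_per_coef=16.0):
    """Select the sparsity level k of a sparse code over an orthonormal
    dictionary D (n x m, columns orthonormal) by a crude two-part MDL
    codelength: (n/2) log2 of the residual variance plus, per retained
    coefficient, log2(m) index bits and a fixed quantized-value cost.
    Illustrative sketch only."""
    n, m = D.shape
    c = D.T @ y                          # coefficients (orthonormal case)
    order = np.argsort(-np.abs(c))       # atoms by decreasing magnitude
    best_k, best_len = 0, np.inf
    res2 = float(y @ y)                  # residual energy with k = 0 atoms
    for k in range(0, n + 1):
        var = max(res2 / n, 1e-12)       # residual variance estimate
        codelen = 0.5 * n * np.log2(var) + k * (np.log2(m) + bits_per_coef)
        if codelen < best_len:
            best_k, best_len = k, codelen
        if k < n:                        # absorb the next-largest atom
            res2 = max(res2 - float(c[order[k]] ** 2), 0.0)
    x = np.zeros(m)
    x[order[:best_k]] = c[order[:best_k]]
    return x, best_k
```

The trade-off is visible directly: each extra coefficient costs log2(m) + 16 bits and is kept only if it shortens the residual description by more than that.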
Detail-preserving and Content-aware Variational Multi-view Stereo Reconstruction
Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view
images is a fundamental yet active research area in computer vision. Despite
the steady progress in multi-view stereo reconstruction, most existing methods
are still limited in recovering fine-scale details and sharp features while
suppressing noise, and may fail in reconstructing regions with little texture.
To address these limitations, this paper presents a Detail-preserving and
Content-aware Variational (DCV) multi-view stereo method, which reconstructs
the 3D surface by alternating between reprojection error minimization and mesh
denoising. In reprojection error minimization, we propose a novel inter-image
similarity measure, which is effective to preserve fine-scale details of the
reconstructed surface and builds a connection between guided image filtering
and image registration. In mesh denoising, we propose a content-aware
ℓp-minimization algorithm by adaptively estimating the p value and the
regularization parameters based on the current input. It is much more promising
in suppressing noise while preserving sharp features than conventional
isotropic mesh smoothing. Experimental results on benchmark datasets
demonstrate that our DCV method is capable of recovering more surface details,
and obtains cleaner and more accurate reconstructions than state-of-the-art
methods. In particular, our method achieves the best results among all
published methods on the Middlebury dino ring and dino sparse ring datasets in
terms of both completeness and accuracy.
Comment: 14 pages, 16 figures. Submitted to IEEE Transactions on Image
Processing.
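The ℓp idea above can be loosely illustrated with a p-shrinkage operator in the style of Chartrand, which reduces to ordinary soft-thresholding at p = 1 and shrinks more aggressively toward sparsity as p decreases. The paper's content-aware scheme additionally adapts p and the regularization weight to the local content, which this sketch does not attempt.

```python
import numpy as np

def p_shrink(y, lam, p):
    """Chartrand-style p-shrinkage, a proximal-type surrogate for l_p
    penalties with 0 < p <= 1; p = 1 gives plain soft-thresholding.
    Illustrative sketch: no content-adaptive choice of p or lam here."""
    mag = np.abs(y)
    safe = np.where(mag > 0, mag, 1.0)             # avoid 0 ** negative
    thr = lam ** (2.0 - p) * safe ** (p - 1.0)     # magnitude-dependent threshold
    return np.sign(y) * np.maximum(mag - thr, 0.0)
```

Note how for p < 1 the effective threshold shrinks as |y| grows, so large (feature-like) values are barely penalized while small (noise-like) values are killed, which is the qualitative behavior the abstract attributes to ℓp over isotropic smoothing.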
MP-PCA denoising of fMRI time-series data can lead to artificial activation "spreading"
MP-PCA denoising has become the method of choice for denoising in MRI since
it provides an objective threshold to separate the desired signal from unwanted
thermal noise components. In rodents, thermal noise in the coils is an
important source of noise that can reduce the accuracy of activation mapping in
fMRI. Further confounding this problem, vendor data often contains zero-filling
and other effects that may violate MP-PCA assumptions. Here, we develop an
approach to denoise vendor data and assess activation "spreading" caused by
MP-PCA denoising in rodent task-based fMRI data. Data was obtained from N = 3
mice using conventional multislice and ultrafast acquisitions (1 s and 50 ms
temporal resolution, respectively), during visual stimulation. MP-PCA denoising
produced SNR gains of 64% and 39% and Fourier spectral amplitude (FSA)
increases in BOLD maps of 9% and 7% for multislice and ultrafast data,
respectively, when using a small [2 2] denoising window. Larger windows
provided higher SNR and FSA gains with increased spatial extent of activation
that may or may not represent real activation. Simulations showed that MP-PCA
denoising causes activation "spreading" with an increase in false positive rate
and smoother functional maps due to local "bleeding" of principal components,
and that the optimal denoising window for improved specificity of functional
mapping, based on Dice score calculations, depends on the data's tSNR and
functional CNR. This "spreading" effect also applies to another recently
proposed low-rank denoising method (NORDIC). Our results bode well for
dramatically enhancing spatial and/or temporal resolution in future fMRI work,
while taking into account the sensitivity/specificity trade-offs of low-rank
denoising methods.
Variational Multiscale Nonparametric Regression: Algorithms and Implementation
Many modern statistically efficient methods come with tremendous
computational challenges, often leading to large-scale optimisation problems.
In this work, we examine such computational issues for recently developed
estimation methods in nonparametric regression with a specific view on image
denoising. We consider in particular certain variational multiscale estimators
which are statistically optimal in minimax sense, yet computationally
intensive. Such an estimator is computed as the minimiser of a smoothness
functional (e.g., TV norm) over the class of all estimators such that none of
its coefficients with respect to a given multiscale dictionary is statistically
significant. The resulting multiscale Nemirovski-Dantzig estimator (MIND) can
incorporate any convex smoothness functional and combine it with a proper
dictionary including wavelets, curvelets and shearlets. In general, computing
MIND requires solving a high-dimensional constrained convex optimisation
problem with a specific structure of the constraints induced by the statistical
multiscale testing criterion. To solve this explicitly, we discuss three
different algorithmic approaches: the Chambolle-Pock, ADMM and semismooth
Newton algorithms. Algorithmic details and an explicit implementation are
presented, and the solutions are then compared numerically in a simulation study
and on various test images. We thereby recommend the Chambolle-Pock algorithm
in most cases for its fast convergence. We stress that our analysis can also be
transferred to signal recovery and other denoising problems to recover more
general objects whenever it is possible to borrow statistical strength from
data patches of similar object structure.
Comment: Codes are available at https://github.com/housenli/MIN
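Since Chambolle-Pock is the recommended solver, here is a minimal 1-D illustration on a TV-regularized (ROF-type) denoising problem, min over x of ||Dx||_1 + 1/(2*lam) * ||x - f||^2. The multiscale dictionary constraints of MIND are replaced by this plain quadratic data term, so this sketches only the solver, not MIND itself; step sizes and names are assumptions.

```python
import numpy as np

def tv_denoise_cp(f, lam=1.0, iters=300):
    """1-D total-variation denoising via Chambolle-Pock primal-dual
    iterations: dual ascent with projection, primal descent with the
    prox of the quadratic data term, then extrapolation. Toy sketch."""
    n = len(f)
    D = lambda v: v[1:] - v[:-1]                                     # forward differences
    Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))   # adjoint of D
    tau = sigma = 0.25                                               # tau*sigma*||D||^2 < 1
    x = f.astype(float).copy()
    xbar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(iters):
        # Dual step: ascent then projection onto the l_inf ball
        # (the prox of the conjugate of ||.||_1).
        y = np.clip(y + sigma * D(xbar), -1.0, 1.0)
        # Primal step: descent then prox of 1/(2*lam) ||x - f||^2.
        x_new = (x - tau * Dt(y) + (tau / lam) * f) / (1.0 + tau / lam)
        xbar = 2.0 * x_new - x                                       # extrapolation
        x = x_new
    return x
```

In MIND the quadratic data term is replaced by the multiscale significance constraints, which changes the prox in the primal step but leaves this overall primal-dual structure intact.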