1,345 research outputs found
Empirical recovery performance of fourier-based deterministic compressed sensing
Compressed sensing is a novel technique by which one can recover sparse signals from undersampled measurements. Mathematically, measuring an N-dimensional signal...
Gaussian Process Modelling for Improved Resolution in Faraday Depth Reconstruction
The incomplete sampling of data in complex polarization measurements from
radio telescopes negatively affects both the rotation measure (RM) transfer
function and the Faraday depth spectra derived from these data. Such gaps in
polarization data are mostly caused by flagging of radio frequency interference
and their effects worsen as the percentage of missing data increases. In this
paper we present a novel method for inferring missing polarization data based
on Gaussian processes (GPs). Gaussian processes are stochastic processes that
enable us to encode prior knowledge in our models. They also provide a
comprehensive way of incorporating and quantifying uncertainties in regression
modelling. In addition to providing non-parametric model estimates for missing
values, we also demonstrate that Gaussian process modelling can be used for
recovering rotation measure values directly from complex polarization data, and
that inferring missing polarization data using this probabilistic method
improves the resolution of reconstructed Faraday depth spectra. Comment: 16 pages, 10 figures, submitted to MNRAS
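The gap-filling idea above can be illustrated in a flat 1-D setting. The sketch below is a minimal numpy-only GP regression with a squared-exponential kernel: it conditions on samples outside a flagged-out band and predicts the posterior mean and variance inside it. The frequency axis, kernel hyperparameters, and the sine stand-in for a polarization channel are all illustrative assumptions, not the paper's actual data or model.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, length_scale=1.0, variance=1.0, noise=1e-2):
    """GP posterior mean and variance at x_test, given noisy observations."""
    K = rbf_kernel(x_train, x_train, length_scale, variance)
    K += noise * np.eye(len(x_train))          # observation noise on the diagonal
    K_s = rbf_kernel(x_train, x_test, length_scale, variance)
    K_ss = rbf_kernel(x_test, x_test, length_scale, variance)
    L = np.linalg.cholesky(K)                  # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

# Hypothetical frequency axis with an RFI-flagged gap in [4, 6]
freqs = np.linspace(0.0, 10.0, 50)
mask = (freqs < 4.0) | (freqs > 6.0)           # True where data survive flagging
signal = np.sin(freqs)                          # stand-in for one Stokes channel
mean, var = gp_predict(freqs[mask], signal[mask], freqs[~mask])
```

The posterior variance `var` grows toward the middle of the gap, which is the "quantified uncertainty" the abstract refers to: the inferred values come with an error bar rather than a point estimate alone.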
Wavelets, ridgelets and curvelets on the sphere
We present in this paper new multiscale transforms on the sphere, namely the
isotropic undecimated wavelet transform, the pyramidal wavelet transform, the
ridgelet transform and the curvelet transform. All of these transforms can be
inverted, i.e., we can exactly reconstruct the original data from its
coefficients in either representation. Several applications are described. We
show how these transforms can be used in denoising and especially in a Combined
Filtering Method, which uses both the wavelet and the curvelet transforms, thus
benefiting from the advantages of both transforms. An application to component
separation from multichannel data mapped to the sphere is also described in
which we take advantage of moving to a wavelet representation. Comment: Accepted for publication in A&A. Manuscript with all figures can be
downloaded at http://jstarck.free.fr/aa_sphere05.pdf
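The denoising role of an invertible multiscale transform can be sketched on the line rather than the sphere. The example below is a minimal 1-D Haar wavelet transform with soft thresholding of the detail coefficients: it is not the isotropic spherical transforms of the paper, and the piecewise-constant test signal, noise level, and universal threshold are assumptions for illustration only. What it does share with the paper's transforms is exact invertibility: reconstruction from unmodified coefficients recovers the input to machine precision.

```python
import numpy as np

def haar_forward(x, levels):
    """Multi-level orthonormal Haar transform: coarse band + list of detail bands."""
    details = []
    a = x.astype(float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))
        a = (even + odd) / np.sqrt(2)
    return a, details

def haar_inverse(a, details):
    """Exactly invert haar_forward (the transform is orthogonal)."""
    for d in reversed(details):
        x = np.empty(2 * len(a))
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        a = x
    return a

rng = np.random.default_rng(1)
n = 256
clean = np.where(np.arange(n) < n // 2, 1.0, -1.0)   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(n)

approx, details = haar_forward(noisy, levels=4)
thresh = 0.2 * np.sqrt(2 * np.log(n))                # universal threshold, sigma assumed known
details = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0) for d in details]
denoised = haar_inverse(approx, details)
```

A combined filtering method in the paper's sense would apply a second transform (e.g., curvelets) to the residual and merge the two estimates, exploiting the fact that each transform represents a different class of features sparsely.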
The Dantzig selector: Statistical estimation when $p$ is much larger than $n$
In many important statistical applications, the number of variables or
parameters $p$ is much larger than the number of observations $n$. Suppose then
that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a
parameter vector of interest, $X$ is a data matrix with possibly far fewer rows
than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0,\sigma^2)$. Is it
possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate
$\beta$, we introduce a new estimator--we call it the Dantzig selector--which
is a solution to the $\ell_1$-regularization problem
$$\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad\text{subject to}\quad
\|X^*r\|_{\ell_{\infty}}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,$$ where $r$ is
the residual vector $y - X\tilde{\beta}$ and $t$ is a positive scalar. We show
that if $X$ obeys a uniform uncertainty principle (with unit-normed columns)
and if the true parameter vector $\beta$ is sufficiently sparse (which here
roughly guarantees that the model is identifiable), then with very large
probability, $$\|\hat{\beta}-\beta\|_{\ell_2}^2 \le C^2\cdot 2\log p\cdot
\Bigl(\sigma^2+\sum_i \min(\beta_i^2,\sigma^2)\Bigr).$$ Our results are
nonasymptotic and we give values for the constant $C$. Even though $n$ may be
much smaller than $p$, our estimator achieves a loss within a logarithmic
factor of the ideal mean squared error one would achieve with an oracle which
would supply perfect information about which coordinates are nonzero, and which
were above the noise level. In multivariate regression and from a model
selection viewpoint, our result says that it is possible nearly to select the
best subset of variables by solving a very simple convex program, which, in
fact, can easily be recast as a convenient linear program (LP). Comment: This paper discussed in: [arXiv:0803.3124], [arXiv:0803.3126],
[arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135].
Rejoinder in [arXiv:0803.3136]. Published in at
http://dx.doi.org/10.1214/009053606000001523 the Annals of Statistics
(http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
(http://www.imstat.org)
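The abstract's closing remark, that the Dantzig selector can be recast as a linear program, can be made concrete with the standard variable split $\tilde\beta = u - v$, $u, v \ge 0$, which turns the $\ell_1$ objective into a linear one and the $\ell_\infty$ constraint into $2p$ linear inequalities. The sketch below is one such formulation solved with scipy's `linprog`; the synthetic design matrix, sparsity pattern, and noise level are assumptions for illustration, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve min ||b||_1 subject to ||X^T (y - X b)||_inf <= lam as an LP.

    Split b = u - v with u, v >= 0, so ||b||_1 = sum(u + v), and write the
    two-sided constraint -lam <= X^T y - X^T X (u - v) <= lam as A_ub z <= b_ub.
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    c = np.ones(2 * p)                                 # minimize sum(u) + sum(v)
    A_ub = np.block([[XtX, -XtX], [-XtX, XtX]])
    b_ub = np.concatenate([Xty + lam, lam - Xty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    z = res.x
    return z[:p] - z[p:]

rng = np.random.default_rng(0)
n, p, sigma = 30, 60, 0.01                             # p much larger than n
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)                         # unit-normed columns
beta = np.zeros(p)
beta[[3, 17, 42]] = [2.0, -1.5, 1.0]                   # sparse ground truth
y = X @ beta + sigma * rng.standard_normal(n)
lam = np.sqrt(2 * np.log(p)) * sigma                   # threshold scale from the abstract
beta_hat = dantzig_selector(X, y, lam)
```

With a Gaussian design and only three nonzero coefficients, the LP recovers both the support and the magnitudes of $\beta$ up to an error on the order of the threshold, matching the near-oracle behavior the abstract describes.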