
    Empirical recovery performance of fourier-based deterministic compressed sensing

    Compressed sensing is a novel technique by which one can recover sparse signals from undersampled measurements. Mathematically, measuring an N-dimensional signal…
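
    A minimal sketch (not from the paper) of the setup this abstract describes: a k-sparse signal is measured with a partial Fourier matrix and recovered from far fewer samples than dimensions. The random row selection and the orthogonal-matching-pursuit recovery step are assumptions for illustration; the paper studies deterministic Fourier constructions.

        # Toy compressed-sensing recovery: k-sparse signal, partial Fourier
        # measurements, greedy recovery via OMP. Random row selection and
        # OMP are stand-ins for the paper's deterministic construction.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        N, m, k = 256, 64, 8                 # ambient dim, measurements, sparsity

        x = np.zeros(N)                      # k-sparse ground truth
        x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

        rows = rng.choice(N, m, replace=False)
        F = np.fft.fft(np.eye(N))[rows] / np.sqrt(m)   # m selected DFT rows
        A = np.vstack([F.real, F.imag])      # real-valued 2m x N sensing matrix

        y = A @ x                            # undersampled measurements
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(A, y)
        print("max recovery error:", np.abs(omp.coef_ - x).max())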

    Gaussian Process Modelling for Improved Resolution in Faraday Depth Reconstruction

    The incomplete sampling of data in complex polarization measurements from radio telescopes negatively affects both the rotation measure (RM) transfer function and the Faraday depth spectra derived from these data. Such gaps in polarization data are mostly caused by flagging of radio frequency interference, and their effects worsen as the percentage of missing data increases. In this paper we present a novel method for inferring missing polarization data based on Gaussian processes (GPs). Gaussian processes are stochastic processes that enable us to encode prior knowledge in our models. They also provide a comprehensive way of incorporating and quantifying uncertainties in regression modelling. In addition to providing non-parametric model estimates for missing values, we also demonstrate that Gaussian process modelling can be used for recovering rotation measure values directly from complex polarization data, and that inferring missing polarization data using this probabilistic method improves the resolution of reconstructed Faraday depth spectra.
    Comment: 16 pages, 10 figures, submitted to MNRAS.
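
    A minimal sketch of the idea, under assumptions not taken from the paper: Stokes Q and U are modelled as independent GPs over lambda^2 with an RBF-plus-white-noise kernel, and flagged channels are infilled from the GP predictions. scikit-learn's GaussianProcessRegressor stands in for the paper's GP machinery.

        # Infill flagged channels of a toy polarization spectrum with GPs.
        # The synthetic source, the RBF + white-noise kernel, and the
        # independent treatment of Q and U are illustrative assumptions.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        lam2 = np.linspace(0.01, 0.10, 300)   # lambda^2 grid [m^2]
        phi = 40.0                            # true Faraday depth [rad m^-2]
        Q = np.cos(2 * phi * lam2) + 0.05 * rng.standard_normal(lam2.size)
        U = np.sin(2 * phi * lam2) + 0.05 * rng.standard_normal(lam2.size)

        good = rng.random(lam2.size) > 0.3    # ~30% of channels flagged
        kernel = 1.0 * RBF(length_scale=0.01) + WhiteKernel(noise_level=0.05**2)

        X = lam2[good, None]
        gp_q = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Q[good])
        gp_u = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, U[good])

        # Infilled complex polarization on the full channel grid, ready for
        # Faraday depth (RM synthesis) processing.
        P = gp_q.predict(lam2[:, None]) + 1j * gp_u.predict(lam2[:, None])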

    Wavelets, ridgelets and curvelets on the sphere

    We present in this paper new multiscale transforms on the sphere, namely the isotropic undecimated wavelet transform, the pyramidal wavelet transform, the ridgelet transform and the curvelet transform. All of these transforms can be inverted, i.e., we can exactly reconstruct the original data from its coefficients in any of these representations. Several applications are described. We show how these transforms can be used in denoising, and especially in a Combined Filtering Method that uses both the wavelet and the curvelet transforms, thus benefiting from the advantages of each. An application to component separation from multichannel data mapped to the sphere is also described, in which we take advantage of moving to a wavelet representation.
    Comment: Accepted for publication in A&A. Manuscript with all figures can be downloaded at http://jstarck.free.fr/aa_sphere05.pd
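
    A minimal sketch of the isotropic undecimated wavelet transform described here, under the assumption of Gaussian smoothing kernels (the paper uses a B-spline scaling function): each wavelet band is the difference between the map smoothed at successive scales, so summing the bands and the coarse residual reconstructs the input exactly.

        # Isotropic undecimated wavelet transform on the sphere as
        # differences of successively smoothed HEALPix maps; Gaussian
        # beams and the scale schedule are assumptions for illustration.
        import numpy as np
        import healpy as hp

        nside, nscales = 64, 4
        rng = np.random.default_rng(2)
        m = rng.standard_normal(hp.nside2npix(nside))    # toy HEALPix map

        fwhm = [np.radians(0.5 * 2**j) for j in range(nscales)]  # assumed scales
        smoothed = [m] + [hp.smoothing(m, fwhm=f) for f in fwhm]

        bands = [smoothed[j] - smoothed[j + 1] for j in range(nscales)]
        coarse = smoothed[-1]

        recon = coarse + sum(bands)           # telescoping sum: exact inverse
        print("max reconstruction error:", np.abs(recon - m).max())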

    The Dantzig selector: Statistical estimation when $p$ is much larger than $n$

    In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0,\sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, the Dantzig selector, which is a solution to the $\ell_1$-regularization problem $$\min_{\tilde\beta \in \mathbf{R}^p} \|\tilde\beta\|_{\ell_1} \quad \text{subject to} \quad \|X^* r\|_{\ell_\infty} \le (1 + t^{-1})\sqrt{2\log p}\cdot\sigma,$$ where $r$ is the residual vector $y - X\tilde\beta$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $$\|\hat\beta - \beta\|_{\ell_2}^2 \le C^2 \cdot 2\log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).$$ Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
    Comment: Discussed in [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135]; rejoinder in [arXiv:0803.3136]. Published at http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
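
    Since the abstract notes the Dantzig selector can be recast as a linear program, here is a minimal sketch of that reduction: splitting $\beta = u - v$ with $u, v \ge 0$ turns the $\ell_1$ objective and the two-sided $\ell_\infty$ constraint into linear ones. The toy data, the choice $t = 1$, and the use of scipy's linprog are assumptions, not the authors' code.

        # Dantzig selector as an LP: beta = u - v with u, v >= 0; minimize
        # sum(u + v) subject to |X^T y - X^T X (u - v)| <= lam componentwise.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        n, p, sigma = 72, 256, 0.5
        X = rng.standard_normal((n, p)) / np.sqrt(n)  # approx. unit-normed columns
        beta = np.zeros(p)
        beta[:5] = 3.0                                # sparse truth
        y = X @ beta + sigma * rng.standard_normal(n)

        t = 1.0                                       # assumed positive scalar
        lam = (1 + 1 / t) * np.sqrt(2 * np.log(p)) * sigma

        XtX, Xty = X.T @ X, X.T @ y
        A_ub = np.block([[XtX, -XtX], [-XtX, XtX]])   # two-sided ell_inf constraint
        b_ub = np.concatenate([lam + Xty, lam - Xty])
        res = linprog(c=np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        beta_hat = res.x[:p] - res.x[p:]
        print("squared error:", np.sum((beta_hat - beta) ** 2))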