    Scale-discretised ridgelet transform on the sphere

    We revisit the spherical Radon transform, also called the Funk-Radon transform, viewing it as an axisymmetric convolution on the sphere. This viewpoint leads to a straightforward derivation of its spherical harmonic representation, from which we show that the spherical Radon transform can be inverted exactly for signals exhibiting antipodal symmetry. We then construct a spherical ridgelet transform by composing the spherical Radon and scale-discretised wavelet transforms on the sphere. The resulting spherical ridgelet transform also admits exact inversion for antipodal signals. The restriction to antipodal signals is expected, since the spherical Radon and ridgelet transforms themselves produce signals that exhibit antipodal symmetry. Our ridgelet transform is defined natively on the sphere, probes signal content globally along great circles, does not exhibit blocking artefacts, supports spin signals, and admits an exact and explicit inverse transform; no alternative ridgelet construction on the sphere satisfies all of these properties. Our implementation of the spherical Radon and ridgelet transforms is made publicly available. Finally, we illustrate the effectiveness of spherical ridgelets for diffusion magnetic resonance imaging of white matter fibers in the brain.

    Comment: 5 pages, 4 figures, matches version accepted by EUSIPCO; code available at http://www.s2let.org
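    The harmonic action of the Funk-Radon transform is simple enough to sketch in code. The Python below is a minimal illustrative sketch, not the S2LET implementation: it assumes the signal is already given as spherical harmonic coefficients flm, indexed as flm[ell**2 + ell + m] (an indexing convention chosen here, not taken from the paper), and uses the standard Funk-Hecke fact that the transform scales every degree-ell coefficient by 2*pi*P_ell(0). Since P_ell(0) vanishes for odd ell, only the antipodally symmetric (even-ell) part can be inverted, which is exactly the restriction discussed above.

        import numpy as np
        from scipy.special import eval_legendre

        def funk_radon(flm, L):
            # Scale each degree-ell block of coefficients by the
            # Funk-Radon eigenvalue 2*pi*P_ell(0); odd-ell content
            # is annihilated because P_ell(0) = 0 for odd ell.
            glm = flm.astype(complex)
            for ell in range(L):
                glm[ell**2:(ell + 1)**2] *= 2.0 * np.pi * eval_legendre(ell, 0)
            return glm

        def funk_radon_inverse(glm, L):
            # Exact inverse on the even-ell (antipodal) subspace;
            # odd-ell coefficients are unrecoverable and left at zero.
            flm = np.zeros_like(glm)
            for ell in range(0, L, 2):
                lam = 2.0 * np.pi * eval_legendre(ell, 0)
                flm[ell**2:(ell + 1)**2] = glm[ell**2:(ell + 1)**2] / lam
            return flm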

    The application of compressive sampling to radio astronomy I: Deconvolution

    Compressive sampling is a new paradigm for sampling, based on the sparsity of signals or of their representations. It is much less restrictive than Nyquist-Shannon sampling theory, and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. The method can reconstruct both point sources and extended sources, using the isotropic undecimated wavelet transform as the basis for the reconstruction step. We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. The new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.

    Comment: Published by A&A. Matlab code can be found at http://code.google.com/p/csra/download
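    The flavour of CS-based deconvolution is easy to convey with a generic sketch. The Python below is an illustration, not the paper's Matlab code: it solves min_x 0.5*||y - Hx||_2^2 + lam*||x||_1 by iterative soft-thresholding (ISTA), and for brevity assumes sparsity in the image domain itself rather than in the isotropic undecimated wavelet transform the paper uses; H and Ht are caller-supplied functions applying the PSF convolution and its adjoint.

        import numpy as np

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista_deconvolve(y, H, Ht, lam=0.01, step=0.5, n_iter=200):
            # Minimise 0.5*||y - H(x)||^2 + lam*||x||_1 by ISTA.
            # H, Ht: callables applying the PSF and its adjoint;
            # step must not exceed 1 / (largest eigenvalue of Ht(H(.))).
            x = np.zeros_like(Ht(y))
            for _ in range(n_iter):
                x = soft_threshold(x + step * Ht(y - H(x)), step * lam)
            return x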

    Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions

    We develop a robust uncertainty principle for finite signals in $\mathbf{C}^N$ which states that, for almost all subsets $T, W$ of $\{0, \ldots, N-1\}$ with $|T| + |W| \sim (\log N)^{-1/2} N$, there is no signal $f$ supported on $T$ whose discrete Fourier transform is supported on $W$. In fact, we can make this uncertainty principle quantitative, in the sense that if $f$ is supported on $T$, then only a small percentage of the energy (less than half, say) of its Fourier transform is concentrated on $W$. As an application of this quantitative robust uncertainty principle (QRUP), we consider the problem of decomposing a signal into a sparse superposition of spikes and complex sinusoids. We show that if a generic signal $f$ has a decomposition using spike and frequency locations in $T$ and $W$ respectively, obeying $|T| + |W| \le C (\log N)^{-1/2} N$, then this is the unique sparsest possible decomposition (all other decompositions have more nonzero terms). In addition, if $|T| + |W| \le C (\log N)^{-1} N$, then this sparsest decomposition can be found by solving a convex optimization problem.

    Comment: 25 pages, 9 figures
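    A toy experiment makes the decomposition result concrete. The Python below is an illustrative example, not code from the paper: it builds a dictionary of spikes and cosine atoms (a real DCT stands in for the complex DFT purely so the linear program stays real-valued), then solves the basis pursuit problem min ||c||_1 subject to Ac = f by splitting c = u - v with u, v >= 0. For a combined sparsity this small, the LP should return the unique sparsest decomposition, as the theorem predicts for generic signals.

        import numpy as np
        from scipy.fft import idct
        from scipy.optimize import linprog

        N = 64
        Phi = idct(np.eye(N), norm="ortho", axis=0)  # columns: cosine atoms
        A = np.hstack([np.eye(N), Phi])              # spikes + cosines

        # A signal made of 3 spikes and 2 cosine atoms.
        c_true = np.zeros(2 * N)
        c_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
        c_true[[N + 3, N + 17]] = [1.0, -0.7]
        f = A @ c_true

        # Basis pursuit as an LP: c = u - v, minimise 1^T(u + v), A(u - v) = f.
        res = linprog(c=np.ones(4 * N),
                      A_eq=np.hstack([A, -A]), b_eq=f,
                      bounds=[(0, None)] * (4 * N), method="highs")
        c_hat = res.x[:2 * N] - res.x[2 * N:]
        print("max coefficient error:", np.abs(c_hat - c_true).max())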

    The Dantzig selector: Statistical estimation when p is much larger than n

    In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator--we call it the Dantzig selector--which is a solution to the $\ell_1$-regularization problem $$\min_{\tilde{\beta} \in \mathbf{R}^p} \|\tilde{\beta}\|_{\ell_1} \quad \text{subject to} \quad \|X^* r\|_{\ell_\infty} \le (1 + t^{-1}) \sqrt{2 \log p} \cdot \sigma,$$ where $r$ is the residual vector $y - X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $$\|\hat{\beta} - \beta\|_{\ell_2}^2 \le C^2 \cdot 2 \log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).$$ Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which rise above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which can, in fact, easily be recast as a convenient linear program (LP).

    Comment: This paper is discussed in [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135]; rejoinder in [arXiv:0803.3136]. Published at http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
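    Because the Dantzig selector is itself a linear program, it fits in a few lines. The sketch below is an illustration, not the authors' implementation: it splits beta = u - v with u, v >= 0 and rewrites the constraint ||X^T(y - X beta)||_inf <= lambda as two blocks of linear inequalities.

        import numpy as np
        from scipy.optimize import linprog

        def dantzig_selector(X, y, sigma, t=1.0):
            # min ||b||_1  s.t.  ||X^T (y - X b)||_inf <= lam,
            # with lam = (1 + 1/t) * sqrt(2 log p) * sigma as in the paper.
            n, p = X.shape
            lam = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma
            G, w = X.T @ X, X.T @ y
            # Variables z = [u; v] >= 0 with b = u - v; the constraint
            # |w - G(u - v)| <= lam becomes two families of inequalities.
            A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
            b_ub = np.concatenate([lam + w, lam - w])
            res = linprog(c=np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (2 * p), method="highs")
            return res.x[:p] - res.x[p:]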

    Decoding by Linear Programming

    This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector $f \in \mathbf{R}^n$ from corrupted measurements $y = Af + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $$\min_{g \in \mathbf{R}^n} \|y - Ag\|_{\ell_1}$$ provided that the support of the vector of errors is not too large: $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.

    Comment: 22 pages, 4 figures, submitted
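    The decoding program is equally easy to write down as an explicit LP. The sketch below is illustrative, not the paper's code: it introduces a slack vector s with -s <= y - Ag <= s, minimises 1^T s, and then checks recovery on a small Gaussian coding matrix with roughly ten percent of the outputs corrupted (a regime where exact recovery is plausible, though not guaranteed by this toy setup).

        import numpy as np
        from scipy.optimize import linprog

        def l1_decode(A, y):
            # min_g ||y - A g||_1, as an LP over z = [g (free); s >= 0]
            # with the constraints -s <= y - A g <= s.
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(m)])
            A_ub = np.vstack([np.hstack([-A, -np.eye(m)]),   # y - A g <= s
                              np.hstack([ A, -np.eye(m)])])  # A g - y <= s
            b_ub = np.concatenate([-y, y])
            bounds = [(None, None)] * n + [(0, None)] * m
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                          method="highs")
            return res.x[:n]

        # Toy check: Gaussian coding matrix, ~10% of outputs corrupted.
        rng = np.random.default_rng(0)
        m, n = 128, 64
        A = rng.standard_normal((m, n))
        f = rng.standard_normal(n)
        e = np.zeros(m)
        e[rng.choice(m, 12, replace=False)] = 5.0 * rng.standard_normal(12)
        print(np.abs(l1_decode(A, A @ f + e) - f).max())  # expect ~0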

    Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

    Suppose we are given a vector $f$ in $\mathbf{R}^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or, more exactly, suppose we are interested in a class $\mathcal{F}$ of such objects--discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal $f \in \mathcal{F}$ decay like a power law (or if the coefficient sequence of $f$ in a fixed basis decays like a power law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.

    Comment: 39 pages; no figures; to appear. The Bernoulli ensemble proof has been corrected; other expository and bibliographical changes made, incorporating the referee's suggestions
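    The notion of compressibility driving this result merits a tiny numeric illustration (an example constructed here, not taken from the paper). If the sorted coefficient magnitudes decay like k^(-s), the l2 error of keeping only the k largest entries decays like k^(1/2 - s), so a few hundred of several thousand coefficients already capture the signal; this best k-term approximation error is the benchmark that reconstruction from random measurements is shown to match up to logarithmic factors.

        import numpy as np

        N, s = 4096, 1.0
        coeffs = np.arange(1, N + 1, dtype=float) ** (-s)  # power-law decay
        tail = np.cumsum(coeffs[::-1] ** 2)[::-1]  # energy beyond the k largest
        for k in (16, 64, 256, 1024):
            # l2 error of the best k-term approximation, ~ k^(1/2 - s)
            print(k, np.sqrt(tail[k]))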