Sparse Recovery for Orthogonal Polynomial Transforms
In this paper we consider the following sparse recovery problem. We have query access to a vector x ∈ ℝ^N such that x̂ = F x is k-sparse (or nearly k-sparse) for some orthogonal transform F. The goal is to output an approximation (in an ℓ_2 sense) to x̂ in sublinear time. This problem has been well studied in the special case that F is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time O(k · polylog N). However, for transforms F other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled.
In this paper we give sublinear-time algorithms - running in time poly(k log(N)) - for solving the sparse recovery problem for orthogonal transforms F that arise from orthogonal polynomials. More precisely, our algorithm works for any F that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability.
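As an illustrative sketch (not the paper's sublinear algorithm), the setup can be made concrete with Chebyshev polynomials, which sit inside the Jacobi family (α = β = -1/2): a sparse coefficient vector x̂ is observed through the polynomial transform x = F x̂, and at Chebyshev nodes the discrete orthogonality of the family inverts F exactly.

```python
import numpy as np
from numpy.polynomial import chebyshev

# Illustrative sketch (not the paper's algorithm): an orthogonal
# polynomial transform built from Chebyshev polynomials, a special
# case (alpha = beta = -1/2) of the Jacobi family.
N = 64
nodes = np.cos(np.pi * (np.arange(N) + 0.5) / N)   # Chebyshev nodes

# Column j of F holds T_j evaluated at the nodes, so x = F @ xhat
# maps sparse Chebyshev coefficients to samples of the signal.
F = chebyshev.chebvander(nodes, N - 1)

xhat = np.zeros(N)                     # 3-sparse coefficient vector
xhat[[2, 17, 40]] = [1.0, -0.5, 2.0]
x = F @ xhat

# Discrete orthogonality of T_j at these nodes inverts the transform
# exactly (for Chebyshev this is essentially a DCT, which is why
# sparse-DCT techniques already cover this special case).
xhat_rec = (2.0 / N) * (F.T @ x)
xhat_rec[0] /= 2.0
```

The dense inversion above costs O(N^2) and reads all of x; the point of the paper is to recover the k nonzeros of x̂ from only poly(k log N) queries.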
Our approach is to give a very general reduction from the k-sparse recovery problem to the 1-sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this 1-sparse recovery problem for transforms derived from Jacobi polynomials. Frequently, sparse FFT algorithms are described as implementing such a reduction; however, the technical details of such works are quite specific to the Fourier transform, and moreover the actual implementations of these algorithms do not use the 1-sparse algorithm as a black box. In this work we give a reduction that works for a broad class of orthogonal polynomial families, and which uses any 1-sparse recovery algorithm as a black box.
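To see what the black-box 1-sparse primitive looks like in the familiar Fourier case (the paper's Jacobi-transform version is different), note that when x̂ is exactly 1-sparse the signal is a single complex exponential, so two noiseless queries to x determine both the frequency and the amplitude:

```python
import numpy as np

# Hedged sketch of classic noiseless 1-sparse recovery for the DFT,
# shown only to illustrate the primitive the reduction consumes.
N = 256
f_true, a_true = 37, 2.0 - 1.5j

# Signal whose DFT is 1-sparse: x[t] = (a / N) * exp(2*pi*i*f*t / N)
# (numpy inverse-DFT normalization, an assumption about conventions).
def x(t):
    return (a_true / N) * np.exp(2j * np.pi * f_true * t / N)

# Two queries suffice: the ratio of consecutive samples is
# exp(2*pi*i*f/N), which pins down the frequency.
ratio = x(1) / x(0)
f_rec = int(round(N * np.angle(ratio) / (2 * np.pi))) % N
a_rec = N * x(0)                       # amplitude from one sample
```

With noise, real sparse FFTs replace the ratio trick with more robust frequency estimation, but the interface is the same: a few queries in, one (frequency, amplitude) pair out.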
Sampling and Super-resolution of Sparse Signals Beyond the Fourier Domain
Recovering a sparse signal from its low-pass projections in the Fourier
domain is a problem of broad interest in science and engineering and is
commonly referred to as super-resolution. In many cases, however, the Fourier
domain may not be the natural choice. For example, in holography, low-pass
projections of sparse signals are obtained in the Fresnel domain. Similarly,
time-varying system identification relies on low-pass projections on the space
of linear frequency modulated signals. In this paper, we study the recovery of
sparse signals from low-pass projections in the Special Affine Fourier
Transform domain (SAFT). The SAFT parametrically generalizes a number of well
known unitary transformations that are used in signal processing and optics. In
analogy to Shannon's sampling framework, we specify sampling theorems for
recovery of sparse signals considering three specific cases: (1) sampling with
arbitrary, bandlimited kernels, (2) sampling with smooth, time-limited kernels
and, (3) recovery from Gabor transform measurements linked with the SAFT
domain. Our work offers a unifying perspective on the sparse sampling problem
which is compatible with the Fourier, Fresnel and Fractional Fourier domain
based results. In deriving our results, we introduce the SAFT series (analogous
to the Fourier series) and the short time SAFT, and study convolution theorems
that establish a convolution-multiplication property in the SAFT domain.
Comment: 42 pages, 3 figures, manuscript under review
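For a sense of how the SAFT unifies these domains: it extends the linear canonical transform (LCT), whose chirp-modulated kernel is, up to normalization, exp(i/(2b)·(a·t² − 2tω + d·ω²)); the SAFT adds two offset parameters on top (an assumption about conventions here, since normalizations vary across the literature). Setting (a, b, d) = (0, 1, 0) collapses the kernel to the ordinary Fourier kernel exp(−i·t·ω):

```python
import numpy as np

# Rough numerical sketch (conventions assumed, offsets omitted): the
# LCT kernel exp(i/(2b) * (a*t^2 - 2*t*w + d*w^2)), of which the
# Fourier kernel is the special case a = d = 0, b = 1.
def lct_kernel(t, w, a, b, d):
    T, W = np.meshgrid(t, w)           # rows indexed by w, cols by t
    return np.exp(1j / (2 * b) * (a * T**2 - 2 * T * W + d * W**2))

t = np.linspace(-1.0, 1.0, 8)
w = np.linspace(-1.0, 1.0, 8)
K_fourier = lct_kernel(t, w, a=0.0, b=1.0, d=0.0)
```

The Fresnel and fractional Fourier domains mentioned above correspond to other choices of (a, b, d), which is what makes a single SAFT sampling theorem cover all of them at once.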
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent advances in developing
basic GSP tools, including methods for sampling, filtering or graph learning.
Next, we review progress in several application areas using GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas.
Comment: To appear, Proceedings of the IEEE
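The core GSP construction the overview refers to can be sketched in a few lines (an illustrative toy example, not taken from the paper): the graph Fourier transform expands a signal in the eigenbasis of the graph Laplacian, and filtering acts on those spectral coefficients.

```python
import numpy as np

# Minimal GSP sketch on a 4-node path graph: the graph Fourier
# transform is the eigenbasis of the combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian

evals, U = np.linalg.eigh(L)             # graph "frequencies" and basis
signal = np.array([1.0, 2.0, 2.0, 1.0])  # a signal on the 4 nodes

coeffs = U.T @ signal                    # graph Fourier transform
smoothed = U @ (np.exp(-evals) * coeffs) # low-pass (heat-kernel) filter
```

The smallest Laplacian eigenvalue is 0 with a constant eigenvector, mirroring the DC component of classical Fourier analysis; larger eigenvalues play the role of higher frequencies, which is the connection to conventional DSP that the overview develops.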