    Imaging via Compressive Sampling [Introduction to compressive sampling and recovery via convex programming]

    There is an extensive body of literature on image compression, but the central concept is straightforward: we transform the image into an appropriate basis and then code only the important expansion coefficients. The crux is finding a good transform, a problem that has been studied extensively from both a theoretical [14] and practical [25] standpoint. The most notable product of this research is the wavelet transform [9], [16]; switching from sinusoid-based representations to wavelets marked a watershed in image compression and is the essential difference between the classical JPEG [18] and modern JPEG-2000 [22] standards. Image compression algorithms convert high-resolution images into relatively small bit streams (while keeping the essential features intact), in effect turning a large digital data set into a substantially smaller one. But is there a way to avoid the large digital data set to begin with? Is there a way we can build the data compression directly into the acquisition? The answer is yes, and that is what compressive sampling (CS) is all about.
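
    As a concrete illustration of the transform-coding idea in this abstract, here is a minimal sketch in Python. It uses a 2-D DCT as the sparsifying basis purely for convenience (the abstract's watershed example is wavelets), and the 5% keep-fraction and function name are illustrative assumptions, not anything specified in the text.

    ```python
    # Transform coding in miniature: move to a transform basis, keep only
    # the largest coefficients, and reconstruct from them.
    import numpy as np
    from scipy.fft import dctn, idctn

    def compress(image: np.ndarray, keep_fraction: float = 0.05) -> np.ndarray:
        """Zero all but the largest `keep_fraction` of DCT coefficients."""
        coeffs = dctn(image, norm="ortho")
        k = max(1, int(keep_fraction * coeffs.size))
        # Threshold at the k-th largest magnitude; smaller coefficients -> 0.
        thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
        coeffs[np.abs(coeffs) < thresh] = 0.0
        return idctn(coeffs, norm="ortho")

    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth test image
    approx = compress(img)
    print("relative error:", np.linalg.norm(img - approx) / np.linalg.norm(img))
    ```

    A smooth image is well approximated from a few percent of its transform coefficients; this compressibility is what CS proposes to exploit directly at acquisition time.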

    Convex Cardinal Shape Composition

    We propose a new shape-based modeling technique for applications in imaging problems. Given a collection of shape priors (a shape dictionary), we define our problem as choosing the right dictionary elements and geometrically composing them through basic set operations to characterize desired regions in an image. This is a combinatorial problem whose exact solution requires an exhaustive search over a large number of possibilities. We propose a convex relaxation of the problem to make it computationally tractable. We take major steps towards analyzing the proposed convex program and characterizing its minimizers. Applications range from shape-based characterization, object tracking, optical character recognition, and shape recovery under occlusion to other disciplines such as the geometric packing problem.
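
    The paper's actual convex program is more elaborate than can be reproduced from the abstract; the toy sketch below (with the hypothetical helper `compose`, and cvxpy as an assumed solver) only illustrates the relaxation pattern: binary shape-selection variables are relaxed to the interval [0, 1] and a sparsity penalty encourages picking few dictionary elements. A linear superposition of masks stands in for the paper's set operations.

    ```python
    # Toy convex relaxation of shape selection: relax {0,1} selection
    # variables to [0,1] and penalize the number of shapes used.
    import numpy as np
    import cvxpy as cp

    def compose(shape_masks: np.ndarray, target: np.ndarray, lam: float = 0.1):
        """shape_masks: (n_shapes, n_pixels) binary; target: (n_pixels,)."""
        alpha = cp.Variable(shape_masks.shape[0])   # relaxed selection weights
        composite = shape_masks.T @ alpha           # linear stand-in for set union
        objective = cp.Minimize(cp.norm1(composite - target) + lam * cp.sum(alpha))
        cp.Problem(objective, [alpha >= 0, alpha <= 1]).solve()
        return alpha.value                          # values near 1 = selected shapes

    # Example: two overlapping 1-D "shapes" on a 10-pixel grid.
    masks = np.array([[1] * 5 + [0] * 5, [0] * 3 + [1] * 7], dtype=float)
    target = np.clip(masks.sum(axis=0), 0, 1)       # their union
    print(np.round(compose(masks, target), 2))
    ```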

    Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions

    We develop a robust uncertainty principle for finite signals in C^N which states that, for almost all subsets T, W of {0,...,N-1} such that |T| + |W| ~ (\log N)^{-1/2} N, there is no signal f supported on T whose discrete Fourier transform is supported on W. In fact, we can make the above uncertainty principle quantitative in the sense that if f is supported on T, then only a small percentage of the energy (less than half, say) of its Fourier transform is concentrated on W. As an application of this quantitative robust uncertainty principle (QRUP), we consider the problem of decomposing a signal into a sparse superposition of spikes and complex sinusoids. We show that if a generic signal f has a decomposition using spike and frequency locations in T and W respectively, obeying |T| + |W| <= C (\log N)^{-1/2} N, then this is the unique sparsest possible decomposition (all other decompositions have more non-zero terms). In addition, if |T| + |W| <= C (\log N)^{-1} N, then this sparsest decomposition can be found by solving a convex optimization problem.
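
    A hedged sketch of the convex program in the last sentence: stack the identity (spikes) and the unitary DFT (complex sinusoids) into one dictionary and minimize the $\ell_1$ norm of the coefficients subject to reproducing the signal, i.e. basis pursuit. The use of cvxpy and scipy, and the specific sizes, are illustrative assumptions rather than the authors' setup.

    ```python
    # Basis pursuit over the spike/sinusoid dictionary [I | F]:
    # min ||x||_1  s.t.  [I | F] x = f.
    import numpy as np
    import cvxpy as cp
    from scipy.linalg import dft

    N = 64
    F = dft(N) / np.sqrt(N)          # unitary DFT; columns are complex sinusoids
    A = np.hstack([np.eye(N), F])    # [spikes | sinusoids] dictionary

    rng = np.random.default_rng(1)
    x_true = np.zeros(2 * N, dtype=complex)
    x_true[rng.choice(2 * N, size=4, replace=False)] = rng.standard_normal(4)
    f = A @ x_true                   # signal with a 4-term sparse decomposition

    x = cp.Variable(2 * N, complex=True)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == f]).solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))
    ```

    Here |T| + |W| = 4, well inside the regime the theorem addresses (up to the constant C) for N = 64, so the $\ell_1$ program should return the sparsest decomposition.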

    Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information

    This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal $f \in \mathbb{C}^N$ and a randomly chosen set of frequencies $\Omega$ of mean size $\tau N$. Is it possible to reconstruct $f$ from the partial knowledge of its Fourier coefficients on the set $\Omega$? A typical result of this paper is as follows: for each $M > 0$, suppose that $f$ obeys $\#\{t : f(t) \neq 0\} \le \alpha(M) \cdot (\log N)^{-1} \cdot \#\Omega$; then with probability at least $1 - O(N^{-M})$, $f$ can be reconstructed exactly as the solution to the $\ell_1$ minimization problem $$\min_g \sum_{t=0}^{N-1} |g(t)| \quad \text{s.t.} \quad \hat g(\omega) = \hat f(\omega) \ \text{for all}\ \omega \in \Omega.$$ In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for $\alpha$ which depend on the desired probability of success; except for the logarithmic factor, the condition on the size of the support is sharp. The methodology extends to a variety of other setups and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of $f$.
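
    To make the statement concrete, here is a small numerical sketch of the $\ell_1$ program above: observe the DFT of a sparse signal on a random frequency set $\Omega$ and recover the signal by minimizing the $\ell_1$ norm subject to matching the observed Fourier coefficients. cvxpy and the problem sizes are my assumptions, not the paper's code.

    ```python
    # l1 recovery from partial Fourier data:
    # min ||g||_1  s.t.  ghat(omega) = fhat(omega) for all omega in Omega.
    import numpy as np
    import cvxpy as cp
    from scipy.linalg import dft

    N, n_spikes, n_freqs = 128, 5, 40
    rng = np.random.default_rng(2)

    f = np.zeros(N)
    f[rng.choice(N, size=n_spikes, replace=False)] = rng.standard_normal(n_spikes)

    F = dft(N) / np.sqrt(N)              # unitary DFT matrix
    omega = rng.choice(N, size=n_freqs, replace=False)
    f_hat = F[omega] @ f                 # observed Fourier coefficients on Omega

    g = cp.Variable(N, complex=True)
    cp.Problem(cp.Minimize(cp.norm1(g)), [F[omega] @ g == f_hat]).solve()
    print("max reconstruction error:", np.abs(g.value - f).max())
    ```

    With 5 spikes and 40 observed frequencies, the support condition above is comfortably met for reasonable $\alpha$, and the convex program recovers $f$ exactly up to solver tolerance.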