
    Sparse Fast Trigonometric Transforms

    Trigonometric transforms like the Fourier transform or the discrete cosine transform (DCT) are of immense importance in signal and image processing, physics, engineering, and data processing. The research of past decades has provided us with runtime-optimal algorithms for these transforms. Significant runtime improvements are only possible if additional a priori information about the sparsity of the signal is available. In the first part of this thesis we develop sublinear algorithms for the fast Fourier transform for frequency-sparse periodic functions. We investigate three classes of sparsity: short frequency support, polynomially structured sparsity, and block sparsity. For all three classes we present new deterministic, sublinear algorithms that recover the Fourier coefficients of periodic functions from samples. We prove theoretical runtime and sampling bounds for all algorithms and also investigate their performance in numerical experiments. In the second part of this thesis we focus on the reconstruction of vectors with short support from their DCT of type II. We present two new deterministic, sublinear algorithms for this problem. The first method is based on inverse discrete Fourier transforms and uses complex arithmetic, whereas the second one exploits properties of the DCT and employs only real arithmetic. We show theoretical runtime and sampling bounds for both algorithms and compare them numerically in experiments. Furthermore, we generalize the real-arithmetic techniques for recovering vectors with short support from their DCT of type II to the two-dimensional setting of recovering matrices with block support, and provide theoretical runtime and sampling complexities for the resulting two-dimensional algorithm.
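
    The complex-arithmetic route mentioned for the DCT-II rests on the classical identity expressing a length-N DCT-II through a length-2N DFT of the mirrored signal. A minimal sketch of that identity is given below; it shows only the full-length transform relation, not the thesis's sublinear algorithms.

```python
import numpy as np

def dct2_via_fft(x):
    """DCT-II of x computed through a length-2N FFT of the mirrored signal.

    Uses the identity C[k] = 0.5 * Re(exp(-i*pi*k/(2N)) * FFT([x, x reversed])[k]),
    where C[k] = sum_n x[n] * cos(pi*(n+0.5)*k/N).
    """
    n = len(x)
    y = np.concatenate([x, x[::-1]])              # mirror extension of length 2N
    Y = np.fft.fft(y)[:n]                         # first N DFT coefficients
    return 0.5 * np.real(np.exp(-1j * np.pi * np.arange(n) / (2 * n)) * Y)

# check against the O(N^2) definition
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
n = np.arange(8)
direct = np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / 8)) for k in range(8)])
assert np.allclose(dct2_via_fft(x), direct)
```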

    Sparse Representation of Astronomical Images

    Sparse representation of astronomical images is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm (i) effectiveness at producing sparse representations and (ii) competitiveness with respect to the time required to process large images. The latter is a consequence of the suitability of the proposed dictionaries for approximating images in partitions of small blocks. This feature makes it possible to apply the effective greedy selection technique Orthogonal Matching Pursuit up to some block size. For blocks exceeding that size, a refinement of the original Matching Pursuit approach is considered. The resulting method is termed Self Projected Matching Pursuit, because it is shown to be effective for implementing, via Matching Pursuit itself, the optional back-projection intermediate steps in that approach. Software to implement the approach is available at http://www.nonlinear-approx.info/examples/node1.htm
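
    Orthogonal Matching Pursuit, named above, greedily picks the dictionary atom most correlated with the current residual and re-fits all selected coefficients by least squares at each step. A minimal generic sketch follows; the mixed Cosine/Dirac dictionaries and the Self Projected Matching Pursuit refinement of the paper are not reproduced here.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate y with k atoms of dictionary D.

    D: (m, n) matrix with (roughly) unit-norm columns, y: length-m signal.
    Returns the sparse coefficient vector of length n.
    """
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))       # most correlated atom
        support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol               # orthogonal-projection residual
    coeffs[support] = sol
    return coeffs
```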

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm yields results that equal, often significantly exceed, or fall only marginally short of the best published ones, at a lower computational cost.
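
    The "piecewise linear" character of the estimator comes from the fact that, conditioned on a single Gaussian component, the MAP estimate of x from y = Ax + w with Gaussian noise of variance sigma^2 is a linear (Wiener-type) filter; the nonlinearity lies only in selecting the component. A minimal sketch of that per-component step is shown below under these standard assumptions; the component-selection rule here is a simplified stand-in, not the paper's full MAP-EM procedure.

```python
import numpy as np

def piecewise_linear_map(y, A, sigma2, mus, Sigmas):
    """Per-Gaussian-component MAP estimates of x from y = A x + noise.

    For component k with prior N(mu_k, Sigma_k), the MAP estimate solves the
    linear (Wiener-type) system
        (A^T A / sigma2 + Sigma_k^{-1}) x = A^T y / sigma2 + Sigma_k^{-1} mu_k.
    Component selection below uses the joint negative log-likelihood at the MAP
    point, a simplification of the exact model-selection rule.
    """
    best, best_obj = None, np.inf
    for mu, Sigma in zip(mus, Sigmas):
        Sinv = np.linalg.inv(Sigma)
        H = A.T @ A / sigma2 + Sinv
        x = np.linalg.solve(H, A.T @ y / sigma2 + Sinv @ mu)
        r = y - A @ x
        obj = r @ r / sigma2 + (x - mu) @ Sinv @ (x - mu) + np.log(np.linalg.det(Sigma))
        if obj < best_obj:
            best, best_obj = x, obj
    return best
```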

    Modulated Unit-Norm Tight Frames for Compressed Sensing

    In this paper, we propose a compressed sensing (CS) framework that consists of three parts: a unit-norm tight frame (UTF), a random diagonal matrix, and a column-wise orthonormal matrix. We prove that this structure satisfies the restricted isometry property (RIP) with high probability if the number of measurements satisfies m = O(s log^2 s log^2 n) for s-sparse signals of length n and if the column-wise orthonormal matrix is bounded. Some existing structured sensing models can be studied under this framework, which then gives tighter bounds on the required number of measurements to satisfy the RIP. More importantly, we propose several structured sensing models by appealing to this unified framework, such as a general sensing model with arbitrary/deterministic subsamplers, a fast and efficient block compressed sensing scheme, and structured sensing matrices with deterministic phase modulations, all of which can lead to improvements in practical applications. In particular, one of the constructions is applied to simplify the transceiver design of CS-based channel estimation for orthogonal frequency division multiplexing (OFDM) systems. Submitted to IEEE Transactions on Signal Processing.
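
    A minimal sketch of a sensing matrix with the three-part structure described above: a rescaled partial Hadamard matrix as the unit-norm tight frame, a random +-1 diagonal modulation, and an orthonormal DCT as the column-wise orthonormal (bounded) part. These particular choices are illustrative assumptions, not the paper's constructions.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.fft import dct

rng = np.random.default_rng(0)
n, m, s = 64, 24, 4                              # signal length, measurements, sparsity

# 1) unit-norm tight frame: m randomly chosen rows of a Hadamard matrix, rescaled
rows = rng.choice(n, size=m, replace=False)
F = np.sqrt(n / m) * hadamard(n)[rows] / np.sqrt(n)

# 2) random diagonal modulation with +-1 entries
D = np.diag(rng.choice([-1.0, 1.0], size=n))

# 3) a column-wise orthonormal matrix with bounded entries (orthonormal DCT here)
U = dct(np.eye(n), norm='ortho', axis=0)

Phi = F @ D @ U                                  # structured sensing matrix

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x                                      # compressed measurements of a sparse x
print(y.shape)                                   # (24,)
```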

    Sparsity and `Something Else': An Approach to Encrypted Image Folding

    A property of sparse representations in relation to their capacity for information storage is discussed. It is shown that this feature can be used for an application that we term Encrypted Image Folding. The proposed procedure is realizable through any suitable transformation. In particular, in this paper we illustrate the approach by recourse to the Discrete Cosine Transform and a combination of redundant Cosine and Dirac dictionaries. The main advantage of the proposed technique is that both storage and encryption can be achieved simultaneously using simple processing steps. Software for implementing the Encrypted Image Folding proposed in this paper is available at http://www.nonlinear-approx.info
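
    As an illustration of the kind of redundant Cosine plus Dirac dictionary mentioned above, the sketch below concatenates an oversampled cosine dictionary with the identity (Dirac) basis; the oversampling factor and normalization are assumptions for this example, not the paper's exact dictionary.

```python
import numpy as np

def cosine_dirac_dictionary(n, redundancy=2):
    """Concatenate a redundant cosine dictionary with the Dirac (identity) basis.

    The cosine part oversamples the frequency axis by the given redundancy
    factor; every atom is normalized to unit Euclidean norm.
    """
    t = np.arange(n).reshape(-1, 1)
    freqs = np.arange(redundancy * n).reshape(1, -1)
    cosine = np.cos(np.pi * (t + 0.5) * freqs / (redundancy * n))
    cosine /= np.linalg.norm(cosine, axis=0)          # unit-norm cosine atoms
    dirac = np.eye(n)                                 # Dirac (spike) atoms
    return np.hstack([cosine, dirac])

D = cosine_dirac_dictionary(64)
print(D.shape)                                        # (64, 192)
```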

    Flexible Multi-layer Sparse Approximations of Matrices and Applications

    The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into a few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail and then demonstrated experimentally on various problems, including dictionary learning for image denoising and the approximation of large matrices arising in inverse problems.
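
    The payoff of such a factorization is that applying a product of a few sparse factors costs on the order of their total number of nonzeros rather than a full dense matrix-vector product. The sketch below applies a multi-layer sparse factorization factor by factor; the random sparse factors stand in for factors that would be learned by the paper's optimization.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 1024

# a few random sparse square factors standing in for a learned factorization
factors = [sparse.random(n, n, density=0.01, random_state=rng, format='csr')
           for _ in range(4)]

x = rng.standard_normal(n)

# applying the operator factor by factor: cost ~ total nonzeros, not n^2
y = x
for F in reversed(factors):
    y = F @ y

dense = np.linalg.multi_dot([F.toarray() for F in factors])   # explicit dense product
assert np.allclose(y, dense @ x)
print(sum(F.nnz for F in factors), "nonzeros vs", n * n, "dense entries")
```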

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
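
    As a concrete illustration of treating an image patch as a graph signal, the sketch below builds a 4-connected grid graph with intensity-dependent edge weights, takes the combinatorial Laplacian eigenbasis as the graph spectral domain, and applies a crude low-pass filter there. This is a generic construction, not a specific method from the article.

```python
import numpy as np

def grid_graph_laplacian(patch, sigma=0.1):
    """Combinatorial Laplacian of a 4-connected grid graph over an image patch.

    Edge weights follow a Gaussian kernel on intensity differences, so edges
    across image discontinuities carry small weight.
    """
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):            # right and down neighbours
                if i + di < h and j + dj < w:
                    wgt = np.exp(-(patch[i, j] - patch[i + di, j + dj]) ** 2 / sigma ** 2)
                    W[idx(i, j), idx(i + di, j + dj)] = wgt
                    W[idx(i + di, j + dj), idx(i, j)] = wgt
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
L = grid_graph_laplacian(patch)
evals, U = np.linalg.eigh(L)                           # graph Fourier basis
coeffs = U.T @ patch.ravel()                           # graph spectral coefficients
coeffs[evals > np.median(evals)] = 0                   # crude graph low-pass filter
smoothed = (U @ coeffs).reshape(patch.shape)
```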