
    Separable Cosparse Analysis Operator Learning

    The ability to represent a certain class of signals sparsely has many applications in data analysis, image processing, and other research fields. Among sparse representations, the cosparse analysis model has recently gained increasing interest. Many signals exhibit a multidimensional structure, e.g. images or three-dimensional MRI scans. Most data analysis and learning algorithms operate on vectorized signals and thereby do not account for this underlying structure; the drawback of ignoring the inherent structure is a dramatic increase in computational cost. We propose an algorithm for learning a cosparse analysis operator that adheres to the preexisting structure of the data and thus allows for a very efficient implementation. This is achieved by enforcing a separable structure on the learned operator. Our learning algorithm can handle multidimensional data of arbitrary order. We evaluate our method on volumetric data, using three-dimensional MRI scans as an example. Comment: 5 pages, 3 figures, accepted at EUSIPCO 201
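
    A minimal sketch of why the separable structure is cheap to apply (an illustration, not the authors' learning algorithm; operator names and sizes are assumptions): for two-dimensional data X, a separable analysis operator acts as two small matrix products, Omega1 X Omega2^T, which equals the full Kronecker operator (Omega2 kron Omega1) applied to vec(X) without that large matrix ever being formed.

import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 32, 32            # signal dimensions (assumed for the demo)
p1, p2 = 48, 48            # rows of the two small analysis operators

Omega1 = rng.standard_normal((p1, n1))
Omega2 = rng.standard_normal((p2, n2))
X = rng.standard_normal((n1, n2))

# Separable application: two small matrix products,
# roughly p1*n1*n2 + p1*n2*p2 multiply-adds.
coeffs_sep = Omega1 @ X @ Omega2.T

# Equivalent unstructured application: one (p1*p2) x (n1*n2) operator, built
# here only to verify vec(Omega1 X Omega2^T) = (Omega2 kron Omega1) vec(X).
Omega_full = np.kron(Omega2, Omega1)
coeffs_full = (Omega_full @ X.reshape(-1, order="F")).reshape((p1, p2), order="F")

print(np.allclose(coeffs_sep, coeffs_full))   # True: identical analysis coefficients

    The same mode-wise trick extends to data of arbitrary order, which is what makes a separable operator attractive for volumetric signals such as MRI scans.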

    Sampling systems matched to input processes and image classes

    This dissertation investigates the sampling and reconstruction of wide sense stationary (WSS) random processes from their sample random variables. In this context, two types of sampling systems are studied, namely interpolation and approximation sampling systems. We aim to determine the properties of the filters in these systems that minimize the mean squared error between the input process and the process reconstructed from its samples. More specifically, for the interpolation sampling system we seek and obtain a closed-form expression for an interpolation filter that is optimal in this sense. Likewise, for the approximation sampling system we derive a closed-form expression for an optimal reconstruction filter given the statistics of the input process and the antialiasing filter. Using these expressions, we show that Meyer-type scaling functions and wavelets arise naturally in the context of subsampled bandlimited processes. We also derive closed-form expressions for the mean squared error incurred by both sampling systems. Using the expression for the mean squared error, we show that for an approximation sampling system, the minimum mean squared error is obtained when the antialiasing filter and the reconstruction filter are spectral factors of an ideal brickwall-type filter. Similar results are derived for the discrete-time equivalents of these sampling systems. Finally, we give examples of interpolation and approximation sampling filters and compare their performance with that of some standard filters. The implementation of these systems is based on a novel framework called the perfect reconstruction circular convolution (PRCC) filter bank framework. The results obtained for the one-dimensional case are extended to the multidimensional case. Sampling a multidimensional random field or image class offers a greater degree of freedom, and the sampling lattice can be defined by a nonsingular matrix D. The aim is to find optimal filters in multidimensional sampling systems to reconstruct the input image class from its samples on a lattice defined by D. Closed-form expressions for the filters in multidimensional interpolation and approximation sampling systems are obtained, as are expressions for the mean squared error incurred by each system. For the approximation sampling system, it is proved that the antialiasing and reconstruction filters that minimize the mean squared error are spectral factors of an ideal brickwall-type filter whose support depends on the sampling matrix D. Finally, we give examples of filters in the interpolation and approximation sampling systems for an image class derived from a LANDSAT image and a quincunx sampling lattice. The performance of these filters is compared with that of some standard filters in the presence of a quantizer.
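
    A toy numerical companion to the interpolation sampling setting (a sketch under simplifying assumptions, not the dissertation's closed-form filters; the process model, decimation factor, and circular implementation are all assumptions of the demo): a WSS process whose power spectrum is confined below the decimated Nyquist rate is decimated by M and reconstructed with an ideal lowpass interpolation filter, and the empirical mean squared error is verified to be negligible.

import numpy as np

rng = np.random.default_rng(1)
N, M = 4096, 4                      # signal length, decimation factor

# Build a bandlimited WSS process by ideal lowpass filtering white noise in
# the DFT domain (a circular model keeps the demo exact).
w = rng.standard_normal(N)
W = np.fft.fft(w)
freqs = np.fft.fftfreq(N)           # cycles/sample in [-0.5, 0.5)
keep = np.abs(freqs) < 0.5 / M      # passband |f| < 1/(2M)
x = np.fft.ifft(W * keep).real

# Interpolation sampling: keep every M-th sample, then reconstruct with the
# ideal (brickwall) interpolation filter, again implemented in the DFT domain.
samples = np.zeros(N)
samples[::M] = x[::M]               # zero-stuffed sample sequence
X_rec = np.fft.fft(samples) * keep * M
x_rec = np.fft.ifft(X_rec).real

mse = np.mean((x - x_rec) ** 2)
print(f"relative MSE: {mse / np.mean(x**2):.2e}")   # ~ 0 up to numerical error

    For processes that are not bandlimited the reconstruction is no longer exact, which is where optimizing the interpolation, antialiasing, and reconstruction filters for minimum mean squared error becomes the relevant question.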

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent progress on basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. Comment: To appear, Proceedings of the IEEE
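
    A minimal sketch of two core GSP ideas mentioned above (this example is not from the paper; the graph and signal are toy assumptions): the graph Fourier transform as the eigenbasis of the graph Laplacian, and graph filtering as pointwise shaping of a signal's spectrum.

import numpy as np

# Adjacency matrix of a small undirected cycle graph on 5 vertices.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# Graph Fourier transform: eigenvectors of L ordered by eigenvalue ("frequency").
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.2, -0.4, 0.3, 0.9])     # a signal, one value per vertex
x_hat = U.T @ x                              # forward GFT

# A low-pass graph filter: attenuate spectral components with large eigenvalue.
h = 1.0 / (1.0 + 2.0 * lam)                  # filter frequency response h(lambda)
y = U @ (h * x_hat)                          # filter in the spectral domain, return to vertices

print(np.round(y, 3))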

    Sampling and Reconstruction of Sparse Signals on Circulant Graphs - An Introduction to Graph-FRI

    With the objective of employing graphs toward a more generalized theory of signal processing, we present a novel sampling framework for (wavelet-)sparse signals defined on circulant graphs which extends basic properties of Finite Rate of Innovation (FRI) theory to the graph domain, and can be applied to arbitrary graphs via suitable approximation schemes. At its core, the introduced Graph-FRI framework states that any K-sparse signal on the vertices of a circulant graph can be perfectly reconstructed from its dimensionality-reduced representation in the graph spectral domain, the Graph Fourier Transform (GFT), of minimum size 2K. By leveraging the recently developed theory of e-splines and e-spline wavelets on graphs, one can decompose this graph spectral transformation into a multiresolution low-pass filtering operation with a graph e-spline filter, followed by a transformation to the graph spectral domain; this allows one to infer a distinct sampling pattern and, ultimately, the structure of an associated coarsened graph which preserves essential properties of the original, including circularity and, where applicable, the graph generating set. Comment: To appear in Appl. Comput. Harmon. Anal. (2017)
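
    A hedged numerical sketch of the recovery principle (not the paper's full pipeline; the graph size, sparsity, random seed, and the use of the plain DFT as the GFT of a circulant graph are assumptions of this demo): a K-sparse vertex signal is recovered from 2K spectral coefficients using the classical annihilating-filter (Prony) step from FRI theory.

import numpy as np

rng = np.random.default_rng(2)
N, K = 32, 3                                  # vertices, sparsity

# K-sparse signal on the vertices of the circulant graph.
support = rng.choice(N, size=K, replace=False)
amplitudes = rng.uniform(0.5, 2.0, size=K)
x = np.zeros(N)
x[support] = amplitudes

# GFT of a circulant graph taken as the DFT; keep only 2K coefficients.
X = np.fft.fft(x)[: 2 * K]

# Annihilating filter: h of length K+1 with sum_l h[l] X[m-l] = 0 for m = K..2K-1.
T = np.array([X[m - np.arange(K + 1)] for m in range(K, 2 * K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                             # nullspace vector of the Toeplitz system

# Roots of h encode the vertex locations via e^{-j 2 pi n_k / N}.
roots = np.roots(h)
locations = np.sort(np.round(-np.angle(roots) * N / (2 * np.pi)).astype(int) % N)

# Amplitudes: least-squares fit of the 2K spectral samples on the recovered support.
V = np.exp(-2j * np.pi * np.outer(np.arange(2 * K), locations) / N)
a = np.linalg.lstsq(V, X, rcond=None)[0].real

print(sorted(support), "->", list(locations))
print(np.round(a, 3))

    The 2K count matches the minimum GFT size stated above: K unknown vertex locations plus K unknown amplitudes.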

    Filters involving derivatives with application to reconstruction from scanned halftone images


    Results on lattice vector quantization with dithering

    The statistical properties of the error in uniform scalar quantization have been analyzed by a number of authors in the past, and the topic is well understood today. The analysis has also been extended to the case of dithered quantizers, and the advantages and limitations of dithering have been studied and well documented in the literature. Lattice vector quantization is a natural extension of uniform scalar quantization into multiple dimensions. Accordingly, there is a natural extension of the analysis of the quantization error. It is the purpose of this paper to present this extension and to elaborate on some of the new aspects that come with multiple dimensions. We show that, analogous to the one-dimensional case, the quantization error vector can be rendered independent of the input by subtractive vector dithering. In this case, the total mean square error is a function only of the underlying lattice, and there are lattices that minimize this error. We give a necessary condition on such lattices. For nonsubtractive vector dithering, we show how to render moments of the error vector independent of the input by using appropriate dither random vectors. These results can readily be applied to wide sense stationary (WSS) vector random processes by using i.i.d. dither sequences. We consider the problem of pre- and post-filtering around a dithered lattice quantizer, and show how these filters should be designed in order to minimize the overall quantization error in the mean square sense. For the special case where the WSS vector process is obtained by blocking a WSS scalar process, the optimum prefilter matrix reduces to the blocked version of the well-known scalar half-whitening filter.
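
    A small numerical sketch of the subtractive-dither property stated above (not the paper's analysis; the cubic lattice, step size, and Laplacian input are assumptions of the demo): with dither drawn uniformly over the quantizer cell and subtracted after quantization, the error becomes independent of the input and uniform over the cell, whereas without dither the error is a deterministic function of the input.

import numpy as np

rng = np.random.default_rng(3)
Delta = 0.5                               # cell size of the scaled cubic lattice Delta * Z^2
n = 200_000

x = rng.laplace(size=(n, 2))              # non-uniform input vectors

def lattice_round(v, step):
    # Nearest-point quantizer for the cubic lattice step * Z^2.
    return step * np.round(v / step)

# Plain quantization: the error is a deterministic function of the input.
e_plain = lattice_round(x, Delta) - x

# Subtractive dithering: dither uniform over the Voronoi cell, subtracted after quantization.
d = rng.uniform(-Delta / 2, Delta / 2, size=(n, 2))
e_dith = lattice_round(x + d, Delta) - d - x

# Conditional error mean on a narrow input slice exposes (in)dependence on the input.
mask = (x[:, 0] > 0.10) & (x[:, 0] < 0.12)
print("E[error | x ~ 0.11], no dither:         ", round(float(e_plain[mask, 0].mean()), 3))  # ~ -0.11
print("E[error | x ~ 0.11], subtractive dither:", round(float(e_dith[mask, 0].mean()), 3))   # ~ 0
print("dithered error MSE per dim:", round(float(np.mean(e_dith**2)), 4), "vs Delta^2/12 =", Delta**2 / 12)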