
    Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit

    This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements: L_1-minimization methods and iterative methods (Matching Pursuits). We present a simple regularized version of Orthogonal Matching Pursuit (ROMP) that combines the advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L_1-minimization. ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity (in practice, even logarithmic), and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
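    Greedy recovery of the OMP family can be illustrated in a few lines. The sketch below is plain OMP, not the regularized ROMP variant described above; the measurement matrix, sparsity level, and signal are arbitrary toy choices.

```python
import numpy as np

def omp(A, y, sparsity):
    """Plain Orthogonal Matching Pursuit: greedily pick the column of A
    most correlated with the residual, then re-fit on the chosen support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef                             # least-squares re-fit
        residual = y - A @ x
    return x

# Recover a 3-sparse signal from 50 random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, sparsity=3)
print(np.allclose(x_hat, x_true))
```

    ROMP differs in that it selects a whole regularized set of coordinates per iteration rather than a single one, which is what yields the uniform guarantees.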

    Investigation of Sparsifying Transforms in Compressed Sensing for Magnetic Resonance Imaging with FastTestCS

    The goal of this contribution is to achieve higher reduction factors for faster Magnetic Resonance Imaging (MRI) scans with better Image Quality (IQ) by using Compressed Sensing (CS). This can be accomplished by adopting and better understanding sparsifying transforms for CS in MRI. A tremendous number of transforms and optional settings are potentially available. Additionally, the amount of research in CS is growing, with possible duplication and difficult practical evaluation and comparison. However, no in-depth analysis of the effectiveness of different redundant sparsifying transforms on MRI images with CS had been undertaken before this work. New theoretical sparsity bounds for the dictionary restricted isometry property constants in CS are presented with mathematical proof. To verify the sparsifying transforms in this setting, the experiments focus on several redundant transforms, contrasting them with orthogonal transforms. The transforms investigated are the Wavelet (WT), Cosine (CT), contourlet, curvelet, k-means singular value decomposition, and Gabor transforms. Several variations of these transforms with corresponding filter options are developed and tested in compression and CS simulations. Translation Invariance (TI) in transforms is found to be a key contributing factor in producing good IQ, because a translation of the signal then does not affect its transform representation. Some transforms tested here are TI, and many others are made TI by transforming small overlapping image patches. These transforms are tested by comparing different under-sampling patterns and reduction ratios with varying image types, including MRI data. Radial, spiral, and various random patterns are implemented and demonstrate that the TIWT is very robust across all under-sampling patterns.
    Results of the TIWT simulations show improvements in de-noising and artifact suppression over individual orthogonal wavelets and total variation ℓ1-minimization in CS simulations. Some of these transforms add considerable time to the CS simulations and prohibit extensive testing of large 3D MRI datasets. Therefore, the FastTestCS software simulation framework is developed and customized for testing images, under-sampling patterns, and sparsifying transforms. This novel software is offered as a practical, robust, universal framework for evaluating and developing simulations in order to quickly test sparsifying transforms for CS MRI.
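    The role of a sparsifying transform can be illustrated by a simple compression experiment: keep only the k largest transform coefficients and compare the reconstruction error against keeping the k largest raw samples. The sketch below uses a hand-built orthonormal DCT matrix on a synthetic 1D signal; it is illustrative only and unrelated to the FastTestCS framework.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n-by-n matrix (columns = basis vectors)."""
    j, k = np.arange(n)[:, None], np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (j + 0.5) * k / n)
    M[:, 0] /= np.sqrt(2.0)
    return M

def keep_largest(c, k):
    """Zero out all but the k largest-magnitude entries."""
    out = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[-k:]
    out[idx] = c[idx]
    return out

n, k = 256, 16
t = np.linspace(0.0, 1.0, n)
signal = np.where(t < 0.5, np.sin(6 * np.pi * t), 0.5)  # piecewise smooth

D = dct_matrix(n)
rec_dct = D @ keep_largest(D.T @ signal, k)   # k largest DCT coefficients
rec_id = keep_largest(signal, k)              # k largest raw samples

err_dct = np.linalg.norm(signal - rec_dct)
err_id = np.linalg.norm(signal - rec_id)
print(err_dct < err_id)   # the DCT concentrates the signal's energy
```

    CS recovery guarantees improve as the representation becomes sparser, which is why the choice of transform matters so much in the experiments above.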

    Sparse Representations for Multivariate Signals (Représentations parcimonieuses pour les signaux multivariés)

    In this thesis, we study approximation and learning methods that provide sparse representations. These methods make it possible to analyze very redundant databases using dictionaries of learned atoms. Being adapted to the data under study, they outperform classical dictionaries, whose atoms are defined analytically, in representation quality. We consider in particular multivariate signals resulting from the simultaneous acquisition of several quantities, such as EEG signals or 2D and 3D motion signals. We extend sparse representation methods to the multivariate model in order to take into account the interactions between the different components acquired simultaneously. This model is more flexible than the usual multichannel model, which imposes a rank-1 hypothesis. We study models of invariant representations: invariance to temporal shift, invariance to rotation, etc. By adding extra degrees of freedom, each kernel is potentially replicated into a family of atoms: translated to every sample, rotated to every orientation, etc. A dictionary of invariant kernels thus generates a very redundant dictionary of atoms, ideal for representing the redundant data under study. All these invariances require methods adapted to these models. Shift invariance is an essential property for the study of temporal signals having a natural temporal variability.
    In the 2D and 3D rotation-invariant case, we observe that the non-oriented approach outperforms the oriented one, even when the data are not rotated. Indeed, the non-oriented model detects the invariants of the data and ensures robustness to rotation when the data are rotated. We also observe the reproducibility of sparse decompositions on a learned dictionary. This generative property is explained by the fact that dictionary learning is a generalization of K-means. Moreover, our representations possess many invariances, which is ideal for classification. We therefore study how to perform classification adapted to the shift-invariant model, using shift-consistent pooling functions.
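    Shift invariance in a dictionary can be sketched with a single kernel and matching pursuit: correlating the residual with the kernel implicitly searches over all of its translations, so one kernel generates a whole family of translated atoms. The kernel, signal, and atom count below are toy choices, and this univariate sketch omits the multivariate and rotation-invariant aspects of the thesis.

```python
import numpy as np

def shift_invariant_mp(signal, kernel, n_atoms):
    """Matching pursuit over all translations of a single kernel."""
    residual = signal.copy()
    atoms = []
    knorm2 = float(kernel @ kernel)
    for _ in range(n_atoms):
        corr = np.correlate(residual, kernel, mode='valid')
        shift = int(np.argmax(np.abs(corr)))      # best translation
        amp = corr[shift] / knorm2                # optimal amplitude there
        residual[shift:shift + len(kernel)] -= amp * kernel
        atoms.append((shift, amp))
    return atoms, residual

# A signal containing two translated, scaled copies of the kernel.
kernel = np.hanning(16)
x = np.zeros(128)
x[20:36] += 2.0 * kernel
x[70:86] -= 1.0 * kernel
atoms, res = shift_invariant_mp(x, kernel, n_atoms=2)
print(sorted(s for s, _ in atoms))   # → [20, 70]
```

    Because the kernel occurrences here do not overlap, both shifts and amplitudes are recovered exactly and the residual vanishes.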

    Theory and Algorithms for Reliable Multimodal Data Analysis, Machine Learning, and Signal Processing

    Modern engineering systems collect large volumes of data measurements across diverse sensing modalities. These measurements can naturally be arranged in higher-order arrays of scalars, commonly referred to as tensors. Tucker decomposition (TD) is a standard method for tensor analysis with applications in diverse fields of science and engineering. Despite its success, TD exhibits severe sensitivity to outliers, i.e., heavily corrupted entries that appear sporadically in modern datasets. We study L1-norm TD (L1-TD), a reformulation of TD that promotes robustness. For 3-way tensors, we show, for the first time, that L1-TD admits an exact solution via combinatorial optimization and present algorithms for its solution. We propose two novel algorithmic frameworks for approximating the exact solution to L1-TD for general N-way tensors. We also propose a novel algorithm for dynamic L1-TD, i.e., efficient and joint analysis of streaming tensors. Principal Component Analysis (PCA), a special case of TD, is likewise sensitive to outliers. We consider Lp-quasinorm PCA (Lp-PCA) for
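    For context, a standard L2-based Tucker decomposition of a 3-way tensor can be computed by the truncated higher-order SVD (HOSVD); this is the outlier-sensitive baseline that L1-TD robustifies, not the dissertation's L1 algorithm. The ranks and tensor below are toy choices.

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated higher-order SVD: an L2-based Tucker decomposition."""
    factors = []
    for mode, r in enumerate(ranks):
        # unfold along `mode`, keep the leading r left singular vectors
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # core tensor: project X onto the factor subspaces
    core = np.einsum('ijk,ia,jb,kc->abc', X, *factors)
    return core, factors

rng = np.random.default_rng(0)
# Build an exactly rank-(2,2,2) tensor; HOSVD should reconstruct it exactly.
G = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
X = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
core, (U1, U2, U3) = hosvd(X, (2, 2, 2))
X_hat = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
print(np.allclose(X, X_hat))
```

    The SVD steps here minimize squared (L2) error per mode, which is exactly where sporadic outliers exert disproportionate influence; L1-TD replaces this fitting criterion with the more robust L1 norm.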

    Representative-based Big Data Processing in Communications and Machine Learning

    This doctoral dissertation focuses on representative-based processing suitable for big sets of high-dimensional data. Compression and subset selection are considered as the two main effective methods for representing a big set of data by a much smaller set of variables. Compressive sensing, matrix singular value decomposition, and tensor decomposition are employed as powerful mathematical tools to analyze the original data in terms of their representatives. Spectrum sensing is an important application of the developed theoretical analysis. In a cognitive radio network (CRN), primary users (PUs) coexist with secondary users (SUs). The secondary network aims to characterize the PUs in order to establish a communication link without any interference with the primary network. A dynamic and efficient spectrum sensing framework is studied based on advanced algebraic tools. In a CRN, collecting information from all SUs is energy-inefficient and computationally complex. A novel sensor selection algorithm based on compressed sensing theory is devised that is compatible with the algebraic nature of the spectrum sensing problem. Moreover, some state-of-the-art applications in machine learning are investigated. One of the main contributions of the present dissertation is the introduction of a versatile data selection algorithm referred to as spectrum pursuit (SP). The goal of SP is to reduce a big set of data to a small subset such that the linear span of the selected data is as close as possible to all data. SP enjoys a low-complexity procedure, which enables it to be extended to more complex selection models. The kernel spectrum pursuit (KSP) facilitates selection from a union of non-linear manifolds. This dissertation investigates a number of important applications in machine learning, including fast training of generative adversarial networks (GANs), graph-based label propagation, few-shot classification, and fast subspace clustering.
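    The flavor of representative selection can be sketched with a simple greedy column subset selection: repeatedly pick the column carrying the most residual energy, then deflate. This is a simplified stand-in for the idea, not the actual spectrum pursuit algorithm; the data dimensions and error threshold are toy choices.

```python
import numpy as np

def greedy_subset(X, k):
    """Pick k columns of X whose span approximates all columns (greedy)."""
    selected = []
    R = X.copy()                        # residuals after projecting out picks
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))       # column with most residual energy
        selected.append(j)
        q = R[:, j] / norms[j]
        R = R - np.outer(q, q @ R)      # deflate along the chosen direction
    return selected

rng = np.random.default_rng(0)
# 100 points lying near a 3-dimensional subspace of R^20
basis = rng.standard_normal((20, 3))
X = basis @ rng.standard_normal((3, 100)) + 0.01 * rng.standard_normal((20, 100))
idx = greedy_subset(X, 3)
coef = np.linalg.lstsq(X[:, idx], X, rcond=None)[0]
rel_err = np.linalg.norm(X - X[:, idx] @ coef) / np.linalg.norm(X)
print(rel_err < 0.05)   # three selected columns span the data well
```

    Like SP, the selected subset consists of actual data points, so each representative remains interpretable, unlike SVD directions.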

    Sparse Modeling of Grouped Line Spectra

    This licentiate thesis focuses on clustered parametric models for the estimation of line spectra, when the spectral content of a signal source is assumed to exhibit some form of grouping. Unlike previous parametric approaches, which generally require explicit knowledge of the model orders, this thesis exploits sparse modeling, where the orders are implicitly chosen. For line spectra, the non-linear parametric model is approximated by a linear system containing an overcomplete basis of candidate frequencies, called a dictionary, and a large set of linear response variables that selects and weights the components in the dictionary. Frequency estimates are obtained by solving a convex optimization program in which the sum of squared residuals is minimized. To discourage overfitting and to infer certain structure in the solution, different convex penalty functions are introduced into the optimization. The cost trade-off between fit and penalty is set by user parameters so as to approximate the true number of spectral lines in the signal, which implies that the response variable will be sparse, i.e., have few non-zero elements. Thus, instead of explicit model orders, the orders are implicitly set by this trade-off. For grouped variables, the dictionary is customized, and appropriate convex penalties are selected, so that the solution becomes group sparse, i.e., has few groups with non-zero variables. In an array of sensors, the specific time delays and attenuations depend on the source and sensor positions. By modeling this, one may estimate the location of a source. In this thesis, a novel joint location and grouped-frequency estimator is proposed, which exploits sparse modeling for both spectral and spatial estimates, showing robustness against sources with overlapping frequency content. For audio signals, this thesis uses two different features for clustering.
    Pitch is a perceptual property of sound that may be described by the harmonic model, i.e., by a group of spectral lines at integer multiples of a fundamental frequency, which we estimate by exploiting a novel adaptive total variation penalty. The other feature, chroma, is a concept in music theory that groups pitches whose fundamental frequencies are powers of two apart. Using a chroma dictionary, together with appropriate group-sparse penalties, we propose an automatic transcription of the chroma content of a signal.
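    Sparse line-spectrum estimation of the kind described above can be sketched as an ℓ1-penalized least-squares fit over an overcomplete dictionary of candidate cosines, solved here with plain iterative soft thresholding (ISTA). The grid size, penalty weight, and on-grid test signal are toy choices, and no grouping penalty is used.

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """l1-penalized least squares via iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

n = 128
t = np.arange(n)
freqs = np.linspace(0.0, 0.5, 200, endpoint=False)  # candidate frequencies
A = np.cos(2 * np.pi * np.outer(t, freqs))          # overcomplete dictionary
A /= np.linalg.norm(A, axis=0)
# Two spectral lines sitting exactly on the candidate grid
y = 2.0 * A[:, 40] + 1.0 * A[:, 120]
x = ista(A, y, lam=0.1)
support = np.flatnonzero(np.abs(x) > 0.3)
print(support)
```

    The penalty weight plays the role of the user parameter discussed above: raising it prunes more candidate frequencies, implicitly setting the model order.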

    Spatial and Temporal Image Prediction with Magnitude and Phase Representations

    In this dissertation, I develop the theory and techniques for spatial and temporal image prediction with the magnitude and phase representation of the Complex Wavelet Transform (CWT) or the over-complete DCT, to solve the problems of image inpainting and motion-compensated inter-picture prediction. First, I develop the theory and algorithms of image reconstruction from the analytic magnitude or phase of the CWT. I prove the conditions under which a signal is uniquely specified by its analytic magnitude or phase, propose iterative algorithms for the reconstruction of a signal from its analytic CWT magnitude or phase, and analyze the convergence of the proposed algorithms. Image reconstruction from the magnitude and pseudo-phase of the over-complete DCT is also discussed and demonstrated. Second, I propose simple geometrical models of the CWT magnitude and phase to describe edges and structured textures, and develop a spatial image prediction (inpainting) algorithm based on those models and the iterative image reconstruction mentioned above. Piecewise smooth signals, structured textures, and their mixtures can be predicted successfully with the proposed algorithm. Simulation results show that the proposed algorithm achieves appealing visual quality with low computational complexity. Finally, I propose a novel temporal (inter-picture) image predictor for hybrid video coding. The proposed predictor enables successful predictive coding during fades, blended scenes, temporally decorrelated noise, and many other temporal evolutions that are beyond the capability of traditional motion-compensated prediction methods. It estimates the transform magnitude and phase of the desired motion-compensated prediction by exploiting the temporal and spatial correlations of the transform coefficients. For implementation in standard hybrid video coders, the over-complete DCT is chosen over the CWT.
    Better coding performance is achieved when the state-of-the-art H.264/AVC video encoder is equipped with the proposed predictor. The proposed predictor is also successfully applied to image registration.
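    Iterative reconstruction from transform magnitudes can be sketched with a Gerchberg–Saxton-style error-reduction loop: alternately impose the known coefficient magnitudes and project back through the pseudo-inverse of the transform. A random Gaussian analysis operator stands in for the CWT/over-complete DCT here; this is not the dissertation's algorithm, only an illustration of the alternating structure, and each iteration provably does not increase the magnitude mismatch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
A = rng.standard_normal((3 * n, n))   # 3x overcomplete analysis operator
A_pinv = np.linalg.pinv(A)

x_true = rng.standard_normal(n)
mags = np.abs(A @ x_true)             # only coefficient magnitudes are known

def mag_err(x):
    return np.linalg.norm(np.abs(A @ x) - mags)

x = rng.standard_normal(n)            # random initial guess
e0 = mag_err(x)
for _ in range(500):
    c = mags * np.sign(A @ x)         # impose the known magnitudes
    x = A_pinv @ c                    # least-squares consistent signal
e1 = mag_err(x)
print(e1 <= e0)   # error reduction: the mismatch never increases
```

    Each sweep alternately minimizes ||Ax - c|| over c with |c| fixed to the data, then over x, so the objective is monotonically non-increasing; uniqueness of the recovered signal depends on the transform, which is where the dissertation's conditions come in.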

    3D Object Recognition Based On Constrained 2D Views

    The aim of the present work was to build a novel 3D object recognition system capable of classifying man-made and natural objects based on single 2D views. The approach to this problem was motivated by recent theories of biological vision and by multiresolution analysis. The project's objective was the implementation of a system able to deal with simple 3D scenes, constituting an engineering solution to the problem of 3D object recognition that allows the proposed recognition system to operate in a practically acceptable time frame. The developed system builds on the work on automatic classification of marine phytoplankton carried out at the Centre for Intelligent Systems, University of Plymouth. The thesis discusses the main theoretical issues that prompted the fundamental system design options. The principles and the implementation of the coarse data channels used in the system are described. A new multiresolution representation of 2D views is presented, which provides the classifier module of the system with coarse-coded descriptions of the scale-space distribution of potentially interesting features. A multiresolution analysis-based mechanism is proposed, which directs the system's attention towards potentially salient features. Unsupervised similarity-based feature grouping is introduced, which is used in the coarse data channels to yield feature signatures that are not spatially coherent and provide the classifier module with salient descriptions of object views. A simple texture descriptor is described, based on properties of a special wavelet transform. The system has been tested on computer-generated and natural image data sets, in conditions where the inter-object similarity was monitored and quantitatively assessed by human subjects, or where the analysed objects were very similar and their discrimination constituted a difficult task even for human experts. The validity of the approaches described above has been proven.
    The studies conducted with various statistical and artificial neural network-based classifiers have shown that the system performs well in all of the situations mentioned above. These investigations also made it possible to extend and generalise a number of important conclusions drawn during previous work in the field of 2D shape (plankton) recognition, regarding the behaviour of pattern recognition systems based on multiple coarse data channels and various classifier architectures. The system is able to deal with difficult field-collected images of objects, and the techniques employed by its component modules make possible its extension to the domain of complex multiple-object 3D scene recognition. The system is expected to find immediate applicability in the field of marine biota classification.
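    A coarse multiresolution descriptor in the spirit of the coarse data channels can be sketched with a small image pyramid: smooth, downsample, and keep per-level summary statistics as a compact feature vector. The binomial filter, level count, and statistics below are toy choices, not the system's actual channels.

```python
import numpy as np

def blur_downsample(img):
    """Separable 1-2-1 binomial smoothing followed by 2x decimation."""
    k = np.array([0.25, 0.5, 0.25])
    img = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    img = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return img[::2, ::2]

def pyramid_features(img, levels=3):
    """Mean and standard deviation at each pyramid level."""
    feats = []
    for _ in range(levels):
        feats += [img.mean(), img.std()]   # coarse per-level statistics
        img = blur_downsample(img)
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
f = pyramid_features(img)
print(f.shape)   # → (6,)
```

    Coarse coding of this kind trades spatial detail for robustness, which is what lets a classifier tolerate the variability of single 2D views of 3D objects.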

    An Unsupervised Approach to Modelling Visual Data

    For very large visual datasets, producing expert ground-truth data for training supervised algorithms can represent a substantial human effort. In these situations there is scope for the use of unsupervised approaches that can model collections of images and automatically summarise their content. The primary motivation for this thesis comes from the problem of labelling large visual datasets of the seafloor obtained by an Autonomous Underwater Vehicle (AUV) for ecological analysis. It is expensive to label this data, as taxonomical experts for the specific region are required, whereas automatically generated summaries can be used to focus the efforts of experts, and inform decisions on additional sampling. The contributions in this thesis arise from modelling this visual data in entirely unsupervised ways to obtain comprehensive visual summaries. Firstly, popular unsupervised image feature learning approaches are adapted to work with large datasets and unsupervised clustering algorithms. Next, using Bayesian models the performance of rudimentary scene clustering is boosted by sharing clusters between multiple related datasets, such as regular photo albums or AUV surveys. These Bayesian scene clustering models are extended to simultaneously cluster sub-image segments to form unsupervised notions of “objects” within scenes. The frequency distribution of these objects within scenes is used as the scene descriptor for simultaneous scene clustering. Finally, this simultaneous clustering model is extended to make use of whole image descriptors, which encode rudimentary spatial information, as well as object frequency distributions to describe scenes. This is achieved by unifying the previously presented Bayesian clustering models, and in so doing rectifies some of their weaknesses and limitations. 
    Hence, the final contribution of this thesis is a practical unsupervised algorithm for modelling images from the super-pixel to album levels, applicable to large datasets.
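    The object-frequency scene descriptor can be sketched with plain k-means standing in for the Bayesian clustering models: cluster patch descriptors, then describe a scene by the histogram of its patch-cluster assignments. The toy descriptors below are synthetic and well separated.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means with random initial centers drawn from the data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def scene_histogram(patches, centers):
    """Describe a scene by its patch-cluster frequency distribution."""
    d2 = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = np.argmin(d2, axis=1)
    return np.bincount(labels, minlength=len(centers)) / len(patches)

rng = np.random.default_rng(1)
# Toy patch descriptors from two well-separated "object" types
patches = np.vstack([rng.normal(0.0, 0.1, (50, 8)),
                     rng.normal(3.0, 0.1, (30, 8))])
centers = kmeans(patches, k=2)
h = scene_histogram(patches, centers)
print(np.sort(h))
```

    The Bayesian models in the thesis go further by sharing clusters across datasets and choosing the number of clusters automatically, but the resulting scene descriptor has this same frequency-distribution form.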