
    Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into a linear combination of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm named MPD++. Different signal decomposition applications may place particular emphasis on accuracy or on computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm can easily be adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition; there, the full potential of MPD++ can be utilized to produce substantial performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases on accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Because of the nature of the three proposed modifications, they can be stacked and have cumulative effects on the reduction of the time complexity.
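
    The classical MPD loop described above is short to state in code. Below is a minimal sketch in Python, assuming unit-norm atoms sampled on the signal grid; the MPD++ refinements (Correlation Thresholding, Coarse-Fine Grids, Multiple Atom Extraction) are not reproduced, and the Gaussian dictionary in the usage lines is a hypothetical stand-in.

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_iter=100, energy_tol=1e-3):
    """Classical MP: repeatedly subtract the best-correlated atom.

    dictionary: (n_atoms, n_samples) array of unit-norm atoms.
    Returns the selected (atom_index, coefficient) pairs and the residual.
    """
    residual = signal.astype(float).copy()
    selected = []
    initial_energy = residual @ residual
    for _ in range(max_iter):
        corr = dictionary @ residual               # cross-correlate with every atom
        best = int(np.argmax(np.abs(corr)))        # best-fit atom
        selected.append((best, corr[best]))
        residual -= corr[best] * dictionary[best]  # subtract it from the residual
        if residual @ residual < energy_tol * initial_energy:
            break                                  # energy-based stopping criterion
    return selected, residual

# Toy usage with a hypothetical dictionary of unit-norm Gaussian atoms.
t = np.linspace(0.0, 1.0, 256)
D = np.stack([np.exp(-(t - c) ** 2 / 0.02) for c in np.linspace(0, 1, 64)])
D /= np.linalg.norm(D, axis=1, keepdims=True)
signal = 2.0 * D[10] - 0.5 * D[40]
atoms, res = matching_pursuit(signal, D)
```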

    Nearfield Acoustic Holography using sparsity and compressive sampling principles

    Regularization of the inverse problem is a complex issue when using Near-field Acoustic Holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, new regularization schemes can be developed, based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e. the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these new techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of Compressive Sampling: under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization. (Journal of the Acoustical Society of America, 2012.)
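
    To make the contrast with Tikhonov regularization concrete, here is a generic sketch; the paper's actual basis design and solver are not specified above, so both routines are illustrative assumptions. Tikhonov has a closed-form solution, while the l1-regularized sparse problem can be solved with a simple iterative soft-thresholding (ISTA) loop.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Closed-form Tikhonov solution: argmin ||Ax - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def ista(A, b, lam, n_iter=500):
    """Sparse solution: argmin 0.5 ||Ax - b||^2 + lam ||x||_1 via ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x
```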

    MULTIPLE DICTIONARY FOR SPARSE MODELING

    Much of the progress made in image processing in the past decades can be attributed to better modeling of image content and a wise deployment of these models in relevant applications. In this paper, we review the role of the sparse and redundant representation model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. Sparse coding is a key principle that underlies the wavelet representation of images. Sparse representation based classification has led to interesting image recognition results, while the dictionary used for sparse coding plays a key role in it. In general, the choice of a proper dictionary can be made in one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set.
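
    A minimal sketch of the two routes in Python (an illustration, not the paper's code): route (i) builds an overcomplete DCT dictionary from a formula, while route (ii) learns atoms from training data with scikit-learn; the random matrix is a placeholder for real image patches.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# (i) Analytic route: an overcomplete DCT dictionary defined by a formula.
def dct_dictionary(n_features, n_atoms):
    k = np.arange(n_atoms)[:, None]                   # atom (frequency) index
    t = np.arange(n_features)[None, :]                # sample index
    D = np.cos(np.pi * k * (2 * t + 1) / (2 * n_atoms))
    return D / np.linalg.norm(D, axis=1, keepdims=True)

D_analytic = dct_dictionary(n_features=64, n_atoms=128)

# (ii) Learned route: fit atoms to a training set (random stand-in here).
X_train = np.random.randn(200, 64)                    # placeholder for image patches
learner = DictionaryLearning(n_components=128, alpha=1.0, max_iter=20)
D_learned = learner.fit(X_train).components_          # (128, 64) learned atoms
```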

    Sparse and Nonnegative Factorizations For Music Understanding

    In this dissertation, we propose methods for sparse and nonnegative factorization that are specifically suited for analyzing musical signals. First, we discuss two constraints that aid factorization of musical signals: harmonic and co-occurrence constraints. We propose a novel dictionary learning method that imposes harmonic constraints upon the atoms of the learned dictionary while allowing the dictionary size to grow appropriately during the learning procedure. When there is significant spectral-temporal overlap among the musical sources, our method outperforms popular existing matrix factorization methods as measured by the recall and precision of learned dictionary atoms. We also propose co-occurrence constraints -- three simple and convenient multiplicative update rules for nonnegative matrix factorization (NMF) that enforce dependence among atoms. Using examples in music transcription, we demonstrate the ability of these updates to represent each musical note with multiple atoms and cluster the atoms for source separation purposes. Second, we study how spectral and temporal information extracted by nonnegative factorizations can improve upon musical instrument recognition. Musical instrument recognition in melodic signals is difficult, especially for classification systems that rely entirely upon spectral information instead of temporal information. Here, we propose a simple and effective method of combining spectral and temporal information for instrument recognition. While existing classification methods use traditional features such as statistical moments, we extract novel features from spectral and temporal atoms generated by NMF using a biologically motivated multiresolution gamma filterbank. Unlike other methods that require thresholds, safeguards, and hierarchies, the proposed spectral-temporal method requires only simple filtering and a flat classifier. Finally, we study how to perform sparse factorization when a large dictionary of musical atoms is already known. Sparse coding methods such as matching pursuit (MP) have been applied to problems in music information retrieval such as transcription and source separation with moderate success. However, when the set of dictionary atoms is large, identification of the best match in the dictionary with the residual is slow -- linear in the size of the dictionary. Here, we propose a variant called approximate matching pursuit (AMP) that is faster than MP while maintaining scalability and accuracy. Unlike MP, AMP uses an approximate nearest-neighbor (ANN) algorithm to find the closest match in a dictionary in sublinear time. One such ANN algorithm, locality-sensitive hashing (LSH), is a probabilistic hash algorithm that places similar, yet not identical, observations into the same bin. While the accuracy of AMP is comparable to similar MP methods, the computational complexity is reduced. Also, by using LSH, this method scales easily; the dictionary can be expanded without reorganizing any data structures.
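
    For context, the sketch below shows the standard multiplicative updates for NMF (Lee-Seung, Frobenius objective) that constrained variants such as the proposed harmonic and co-occurrence rules build on; the constrained updates themselves are not reproduced here. For music, V would typically be a magnitude spectrogram.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor V ~ W @ H (all nonnegative) with multiplicative updates."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))        # spectral atoms (columns of W)
    H = rng.random((rank, V.shape[1]))        # temporal activations (rows of H)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update atoms
    return W, H
```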

    Consensus Matching Pursuit of Multi-Trial Biosignals, with Application to Brain Signals

    Time-frequency representations are commonly used to analyze the oscillatory nature of bioelectromagnetic signals. There is a growing interest in sparse representations, where the data is described using few components. In this study, we adapt the Matching Pursuit of Mallat and Zhang for biosignals consisting of a series of variations around a similar pattern, with emphasis on multi-trial datasets encountered in MEG and EEG. The general principle of Matching Pursuit (MP) is to iteratively subtract from the signal its projection on the atom selected from a dictionary. The originality of our method is to select each atom using a voting technique that is robust to variability, and to subtract it by adapting the parameters to each trial. Because it is designed to handle inter-trial variability using a voting technique, the method is called Consensus Matching Pursuit (CMP). The method is validated on both simplified and realistic simulations, and on two real datasets (intracerebral EEG and scalp EEG). We also compare our method to two other multi-trial MP algorithms: Multivariate MP (MMP) and Induced activity MP (IMP). CMP is shown to be able to sparsely reveal the structure present in the data, and to be robust to variability (jitter) across trials.
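
    A heavily simplified sketch of one CMP-style iteration follows, assuming unit-norm atoms. This is an illustration only: the actual method also adapts atom parameters such as latency to each trial, and its voting scheme is more robust than the pooled correlation used here.

```python
import numpy as np

def consensus_step(trials, dictionary):
    """One consensus iteration: vote across trials for an atom, then
    subtract it from each trial with a trial-specific coefficient.

    trials: (n_trials, n_samples); dictionary: (n_atoms, n_samples), unit-norm.
    """
    corr = trials @ dictionary.T          # (n_trials, n_atoms) projections
    votes = np.abs(corr).sum(axis=0)      # pooled evidence per atom across trials
    atom = int(np.argmax(votes))          # consensus choice of atom
    trials = trials - np.outer(corr[:, atom], dictionary[atom])  # per-trial subtraction
    return atom, trials
```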

    Intelligent sequence stratigraphy through a wavelet-based decomposition of well log data

    Identification of sequence boundaries is an important task in the geological characterization of gas reservoirs. In this study, a continuous wavelet transform (CWT) approach is applied to decompose gamma ray and porosity logs into a set of wavelet coefficients at varying scales. A discrete wavelet transform (DWT) is utilized to decompose well logs into smaller frequency bandwidths called Approximations (A) and Details (D). The methodology is illustrated using a case study from the Ilam and upper Sarvak formations in the Dezful embayment, southwestern Iran. Different graphical visualization techniques of the continuous wavelet transform results allowed a better understanding of the main sequence boundaries. Using the DWT, the maximum flooding surface (MFS) was successfully recognised from both the high-frequency and low-frequency contents of the signals. There is a sharp peak in all Approximation and Detail coefficients corresponding to the MFS, which can specifically be seen in the fifth Approximation (a5), fifth Detail (d5), fourth Detail (d4) and third Detail (d3) coefficients. Sequence boundaries were best recognised from the low-frequency contents of the signals, especially the fifth Approximation (a5). Normally, the troughs of the fifth Approximation correspond to sequence boundaries where higher porosities developed in the Ilam and upper Sarvak carbonate rocks. By hybridizing the CWT and DWT coefficients, a more effective discrimination of sequence boundaries was achieved. The results of this study show that the wavelet transform is a successful, fast and easy approach for identification of the main sequence boundaries from well log data. There is good agreement between core-derived system tracts and those derived from decomposition of well logs using the wavelet transform approach.
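
    With PyWavelets, the decomposition described above takes only a few lines. In the sketch below, the log file name is hypothetical and the db4/Morlet mother wavelets are placeholder choices, since the abstract does not state which wavelets were used.

```python
import numpy as np
import pywt

gamma_ray = np.loadtxt("gamma_ray_log.txt")   # hypothetical well-log file

# DWT: five levels of Approximations and Details -> [a5, d5, d4, d3, d2, d1]
a5, d5, d4, d3, d2, d1 = pywt.wavedec(gamma_ray, "db4", level=5)

# CWT: coefficients over a range of scales (Morlet mother wavelet)
scales = np.arange(1, 128)
cwt_coeffs, freqs = pywt.cwt(gamma_ray, scales, "morl")
```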

    Improving A*OMP: Theoretical and Empirical Analyses With a Novel Dynamic Cost Model

    Best-first search has recently been utilized for compressed sensing (CS) by the A* orthogonal matching pursuit (A*OMP) algorithm. In this work, we concentrate on theoretical and empirical analyses of A*OMP. We present a restricted isometry property (RIP) based general condition for exact recovery of sparse signals via A*OMP. In addition, we develop online guarantees which promise improved recovery performance with residue-based termination instead of the sparsity-based one. We demonstrate the recovery capabilities of A*OMP with extensive recovery simulations using the adaptive-multiplicative (AMul) cost model, which effectively compensates for the path length differences in the search tree. The presented results, involving phase transitions for different nonzero element distributions as well as recovery rates and average error, reveal not only the superior recovery accuracy of A*OMP, but also the improvements with the residue-based termination and the AMul cost model. Comparison of the run times indicates the speed-up provided by the AMul cost model. We also demonstrate a hybrid of OMP and A*OMP to accelerate the search further. Finally, we run A*OMP on a sparse image to illustrate its recovery performance for more realistic coefficient distributions.
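
    For context, here is a minimal sketch of plain OMP with the residue-based termination discussed above; A*OMP's best-first tree search and the AMul cost model are deliberately not reproduced.

```python
import numpy as np

def omp(A, y, residue_tol=1e-6, max_atoms=None):
    """Orthogonal Matching Pursuit with a residue-based stopping rule.

    A: (m, n) dictionary with unit-norm columns; y: (m,) measurement vector.
    """
    m, n = A.shape
    max_atoms = max_atoms if max_atoms is not None else m
    support = []
    x = np.zeros(n)
    residual = y.astype(float).copy()
    while len(support) < max_atoms and np.linalg.norm(residual) > residue_tol:
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j in support:                             # no further progress possible
            break
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS fit on support
        x[support] = x_s                             # update sparse estimate
        residual = y - A[:, support] @ x_s           # orthogonalized residue
    return x
```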