Highly Scalable Matching Pursuit Signal Decomposition Algorithm
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into a linear combination of its dictionary elements, or atoms. The best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and the procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters of the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm, entitled MPD++. Disparate signal decomposition applications may place a particular emphasis on accuracy or on computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm can easily be adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition.
The full potential of MPD++ may be exploited to produce substantial performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases on accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Because of the nature of the three proposed modifications, they can be stacked and have cumulative effects on the reduction of the time complexity.
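The classical loop the abstract describes (correlate, pick the best-fit atom, subtract, repeat on the residual) can be sketched in a few lines. This is a minimal illustration of plain matching pursuit, not the MPD++ implementation; the `corr_threshold` parameter merely hints at the correlation-thresholding idea, and the toy dictionary is invented for the example.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10, corr_threshold=0.0):
    """Greedy matching pursuit: at each iteration pick the dictionary atom
    (unit-norm row) with the largest correlation to the residual, then
    subtract its projection. Stops early if no correlation exceeds the
    threshold -- a stand-in for a correlation-based stopping criterion."""
    residual = signal.astype(float).copy()
    atoms = []                                   # (atom index, coefficient)
    for _ in range(n_iter):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        coef = correlations[k]
        if np.abs(coef) <= corr_threshold:
            break
        residual -= coef * dictionary[k]
        atoms.append((k, coef))
    return atoms, residual

# Toy signal built from two atoms of a random unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 128))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 3.0 * D[7] + 1.5 * D[21]
atoms, res = matching_pursuit(x, D, n_iter=20, corr_threshold=1e-6)
```

Because the dictionary is not orthogonal, the residual shrinks geometrically rather than vanishing in two steps, which is exactly why a large dictionary drives up iteration counts and motivates the pruning described above.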
Nearfield Acoustic Holography using sparsity and compressive sampling principles
Regularization of the inverse problem is a complex issue when using Near-field Acoustic Holography (NAH) techniques to identify vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, new regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e. the possibility of approximating it as a weighted sum of few elementary basis functions. In particular, these new techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of Compressive Sampling: under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization. Comment: Journal of the Acoustical Society of America (2012).
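The core reconstruction step in compressive sampling, recovering a sparse coefficient vector from fewer measurements than unknowns, can be illustrated with a basis-pursuit linear program. This is a generic sketch: the random Gaussian matrix below stands in for the random microphone array, and the dimensions are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program by
    splitting x = u - v with u, v >= 0 and minimizing sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:n] - uv[n:]

rng = np.random.default_rng(1)
n, m, k = 50, 25, 3                       # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sampling, cf. random array
b = A @ x_true                            # half as many measurements as unknowns
x_hat = basis_pursuit(A, b)
```

With the sparsity level well below the measurement count, the l1 program recovers the true vector exactly, which is the phenomenon that lets the array use fewer microphones than classical sampling would demand.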
MULTIPLE DICTIONARY FOR SPARSE MODELING
Much of the progress made in image processing in the past decades can be attributed to better modeling of image content, and a wise deployment of these models in relevant applications. In this paper, we review the role of the sparse representation model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. Sparse coding is a key principle that underlies the wavelet representation of images. Sparse-representation-based classification has led to interesting image recognition results, and the dictionary used for sparse coding plays a key role in it. In general, a proper dictionary can be chosen in one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set.
Sparse and Nonnegative Factorizations For Music Understanding
In this dissertation, we propose methods for sparse and nonnegative factorization that are specifically suited for analyzing musical signals. First, we discuss two constraints that aid factorization of musical signals: harmonic and co-occurrence constraints. We propose a novel dictionary learning method that imposes harmonic constraints upon the atoms of the learned dictionary while allowing the dictionary size to grow appropriately during the learning procedure. When there is significant spectral-temporal overlap among the musical sources, our method outperforms popular existing matrix factorization methods as measured by the recall and precision of learned dictionary atoms. We also propose co-occurrence constraints -- three simple and convenient multiplicative update rules for nonnegative matrix factorization (NMF) that enforce dependence among atoms. Using examples in music transcription, we demonstrate the ability of these updates to represent each musical note with multiple atoms and cluster the atoms for source separation purposes.
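The baseline that the proposed co-occurrence update rules extend is the standard Lee–Seung multiplicative update for NMF. The sketch below shows that unconstrained baseline only, not the harmonic or co-occurrence variants of the dissertation; the toy matrix is an invented stand-in for a magnitude spectrogram.

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F^2.
    Nonnegativity is preserved automatically because each update
    multiplies by a ratio of nonnegative quantities."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps          # spectral atoms (columns)
    H = rng.random((r, n)) + eps          # temporal activations (rows)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spectrogram": two spectral patterns active in alternating frames
V = np.array([[1.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
W, H = nmf(V, r=2)
```

Each column of `W` acts as one dictionary atom; the co-occurrence constraints described above would add terms to these ratios so that groups of atoms activate together for a single note.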
Second, we study how spectral and temporal information extracted by nonnegative factorizations can improve upon musical instrument recognition. Musical instrument recognition in melodic signals is difficult, especially for classification systems that rely entirely upon spectral information instead of temporal information. Here, we propose a simple and effective method of combining spectral and temporal information for instrument recognition. While existing classification methods use traditional features such as statistical moments, we extract novel features from spectral and temporal atoms generated by NMF using a biologically motivated multiresolution gamma filterbank. Unlike other methods that require thresholds, safeguards, and hierarchies, the proposed spectral-temporal method requires only simple filtering and a flat classifier.
Finally, we study how to perform sparse factorization when a large dictionary of musical atoms is already known. Sparse coding methods such as matching pursuit (MP) have been applied to problems in music information retrieval such as transcription and source separation with moderate success. However, when the set of dictionary atoms is large, identification of the best match in the dictionary with the residual is slow -- linear in the size of the dictionary. Here, we propose a variant called approximate matching pursuit (AMP) that is faster than MP while maintaining scalability and accuracy. Unlike MP, AMP uses an approximate nearest-neighbor (ANN) algorithm to find the closest match in a dictionary in sublinear time. One such ANN algorithm, locality-sensitive hashing (LSH), is a probabilistic hash algorithm that places similar, yet not identical, observations into the same bin. While the accuracy of AMP is comparable to similar MP methods, the computational complexity is reduced. Also, by using LSH, this method scales easily; the dictionary can be expanded without reorganizing any data structures.
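The bucketing idea behind AMP can be illustrated with a minimal random-hyperplane (SimHash) index for cosine similarity. The class and parameter names below are invented for illustration, and a practical index would use several hash tables and multi-probe lookups rather than the single table shown here.

```python
import numpy as np
from collections import defaultdict

class LSHIndex:
    """Random-hyperplane LSH: atoms with a similar direction tend to fall
    on the same side of each random hyperplane, so they share a sign-pattern
    hash and a lookup inspects only one bucket instead of the whole
    dictionary. New atoms can be appended by hashing them into a bucket,
    with no global reorganization."""
    def __init__(self, dictionary, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.D = dictionary                       # unit-norm atoms as rows
        self.planes = rng.normal(size=(n_bits, dictionary.shape[1]))
        self.buckets = defaultdict(list)
        for i, atom in enumerate(dictionary):
            self.buckets[self._hash(atom)].append(i)

    def _hash(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def query(self, v):
        """Approximate best match: scan only v's bucket, falling back to
        a full scan when the bucket is empty."""
        cand = list(self.buckets.get(self._hash(v), range(len(self.D))))
        scores = self.D[cand] @ v
        return cand[int(np.argmax(np.abs(scores)))]

rng = np.random.default_rng(3)
D = rng.normal(size=(100, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
index = LSHIndex(D)
```

Replacing MP's full correlation scan with `index.query(residual)` is the essence of the AMP speed-up: the per-iteration cost depends on the bucket size, not the dictionary size.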
Consensus Matching Pursuit of Multi-Trial Biosignals, with Application to Brain Signals
Time-frequency representations are commonly used to analyze the oscillatory nature of bioelectromagnetic signals. There is a growing interest in sparse representations, where the data are described using few components. In this study, we adapt the Matching Pursuit of Mallat and Zhang to biosignals consisting of a series of variations around a similar pattern, with emphasis on the multi-trial datasets encountered in MEG and EEG. The general principle of Matching Pursuit (MP) is to iteratively subtract from the signal its projection on an atom selected from a dictionary. The originality of our method is to select each atom using a voting technique that is robust to variability, and to subtract it by adapting the parameters to each trial. Because it is designed to handle inter-trial variability using a voting technique, the method is called Consensus Matching Pursuit (CMP). The method is validated on both simplified and realistic simulations, and on two real datasets (intracerebral EEG and scalp EEG). We also compare our method to two other multi-trial MP algorithms: Multivariate MP (MMP) and Induced activity MP (IMP). CMP is shown to be able to sparsely reveal the structure present in the data, and to be robust to variability (jitter) across trials.
Intelligent sequence stratigraphy through a wavelet-based decomposition of well log data
Identification of sequence boundaries is an important task in the geological characterization of gas reservoirs. In this study, a continuous wavelet transform (CWT) approach is applied to decompose gamma ray and porosity logs into a set of wavelet coefficients at varying scales. A discrete wavelet transform (DWT) is utilized to decompose well logs into smaller frequency bandwidths called Approximations (A) and Details (D). The methodology is illustrated using a case study from the Ilam and upper Sarvak formations in the Dezful embayment, southwestern Iran. Different graphical visualization techniques of the continuous wavelet transform results allowed a better understanding of the main sequence boundaries. Using the DWT, the maximum flooding surface (MFS) was successfully recognised from both the high-frequency and low-frequency content of the signals. There is a sharp peak corresponding to the MFS in all Approximations and Details, which can specifically be seen in the fifth Approximation (a5), fifth Detail (d5), fourth Detail (d4) and third Detail (d3) coefficients. Sequence boundaries were best recognised from the low-frequency content of the signals, especially the fifth Approximation (a5). Normally, the troughs of the fifth Approximation correspond to sequence boundaries where higher porosities developed in the Ilam and upper Sarvak carbonate rocks. By hybridizing the CWT and DWT coefficients, a more effective discrimination of sequence boundaries was achieved. The results of this study show that the wavelet transform is a successful, fast and easy approach for identifying the main sequence boundaries from well log data. There is good agreement between core-derived system tracts and those derived from the decomposition of well logs using the wavelet transform approach.
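The Approximation/Detail split applied to the well logs can be illustrated with a hand-rolled Haar DWT (the study's actual wavelet choice is not stated here, so Haar is an assumption for simplicity). On a synthetic log with a sharp lithological boundary, the first-level Detail coefficients spike exactly at the break, which is the behaviour exploited for boundary picking.

```python
import numpy as np

def haar_dwt(signal, levels):
    """Multi-level Haar DWT: at each level, split the current signal into
    a smoothed Approximation (A, low-frequency content) and a Detail
    (D, the removed high-frequency content)."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))   # D: local differences
        approx = (even + odd) / np.sqrt(2)          # A: local averages
    return approx, details

# Synthetic "log" with a sharp boundary between two constant units
log = np.concatenate([np.full(31, 1.0), np.full(33, 4.0)])
a, d = haar_dwt(log, levels=3)
```

The first-level Detail `d[0]` is zero everywhere except at the pair straddling the boundary, while the final Approximation `a` retains the smoothed trend, mirroring how the a5 troughs track the low-frequency stratigraphic signal.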
Can interhemispheric desynchronization of cerebral blood flow anticipate upcoming vasospasm in aneurysmal subarachnoid haemorrhage patients?
BACKGROUND: Asymmetry of cerebral autoregulation (CA) has been demonstrated in patients after aneurysmal subarachnoid haemorrhage (aSAH). A classical method for CA assessment requires simultaneous measurement of both arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV). In this study, we have proposed a cerebral blood flow asymmetry index based only on CBFV and analysed its association with the occurrence of vasospasm after aSAH. NEW METHOD: The phase shifts (PS) between slow oscillations in left and right CBFV (side-to-side PS) and between ABP and CBFV (CBFV-ABP PS) were estimated using multichannel matching pursuit (MMP) and cross-spectral analysis. RESULTS: We retrospectively analysed data collected from 45 aSAH patients (26 with vasospasm). Data were analysed up to the 7th day after aSAH unless vasospasm was detected earlier. A progressive asymmetry, manifested by a gradual increase in side-to-side PS on consecutive days after aSAH, was observed in patients who developed vasospasm (adjusted R² = 0.14, p = 0.009). In these patients, the early side-to-side PS was more positive than in patients without vasospasm (2.8° ± 5.6° vs -1.7° ± 5.7°, p = 0.011). No such difference was found in the CBFV-ABP PS. Patients with a positive side-to-side PS were more likely to develop vasospasm than patients with a negative side-to-side PS (21/7 vs 5/12, p = 0.0047). COMPARISON WITH EXISTING METHOD: MMP, in contrast to the spectral approach, accounts for the non-stationarity of the analysed signals. MMP applied to PS estimation reflects cerebral blood flow asymmetry in aSAH better than spectral analysis. CONCLUSIONS: Changes in side-to-side PS might be helpful to identify patients who are at risk of vasospasm.
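The side-to-side phase shift between two slow oscillations can be illustrated with a simple cross-spectral estimate. This sketch uses a plain FFT cross-spectrum on synthetic stationary signals, i.e. the baseline spectral approach, not the multichannel matching pursuit the study proposes for non-stationary data; the function name and sampling parameters are invented for the example.

```python
import numpy as np

def phase_shift_deg(x, y, fs, f0):
    """Phase shift (degrees) of y relative to x at frequency f0,
    read off the cross-spectrum X(f0) * conj(Y(f0))."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))      # nearest frequency bin
    return np.degrees(np.angle(X[k] * np.conj(Y[k])))

# Two synthetic "CBFV" channels: a slow 0.1 Hz oscillation, with the
# right channel lagging the left by 20 degrees
fs, f0 = 100.0, 0.1
t = np.arange(0, 200, 1 / fs)                   # 200 s of signal
left = np.sin(2 * np.pi * f0 * t)
right = np.sin(2 * np.pi * f0 * t - np.radians(20))
ps = phase_shift_deg(left, right, fs, f0)
```

A positive result here indicates the second channel lags the first at the chosen frequency; the MMP-based estimator plays the same role while tracking how that phase relation drifts over time.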
Improving A*OMP: Theoretical and Empirical Analyses With a Novel Dynamic Cost Model
Best-first search has been recently utilized for compressed sensing (CS) by the A* orthogonal matching pursuit (A*OMP) algorithm. In this work, we concentrate on theoretical and empirical analyses of A*OMP. We present a restricted isometry property (RIP) based general condition for exact recovery of sparse signals via A*OMP. In addition, we develop online guarantees which promise improved recovery performance with the residue-based termination instead of the sparsity-based one. We demonstrate the recovery capabilities of A*OMP with extensive recovery simulations using the adaptive-multiplicative (AMul) cost model, which effectively compensates for the path length differences in the search tree. The presented results, involving phase transitions for different nonzero element distributions as well as recovery rates and average error, reveal not only the superior recovery accuracy of A*OMP, but also the improvements with the residue-based termination and the AMul cost model. Comparison of the run times indicates the speed-up provided by the AMul cost model. We also demonstrate a hybrid of OMP and A*OMP to accelerate the search further. Finally, we run A*OMP on a sparse image to illustrate its recovery performance for more realistic coefficient distributions.
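The greedy step that A*OMP expands into a best-first tree search is plain orthogonal matching pursuit: select the column most correlated with the residual, then re-fit all selected coefficients by least squares. The sketch below shows that single-path baseline only, with a residue-based stop implied by the fixed iteration budget; dimensions and values are invented for the example.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily grow a support set, and at
    every iteration re-fit all selected coefficients by least squares so
    the residual stays orthogonal to the chosen columns."""
    residual = b.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 60)) / np.sqrt(40)   # CS measurement matrix
x_true = np.zeros(60)
x_true[[5, 17, 40]] = [2.0, -1.5, 1.0]        # 3-sparse signal
b = A @ x_true
x_hat = omp(A, b, k=3)
```

A*OMP keeps several such candidate supports alive simultaneously and uses a cost model (such as AMul) to decide which partial path to extend next, trading extra search for better recovery when greedy OMP would commit to a wrong column.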
Time-frequency representation of earthquake accelerograms and inelastic structural response records using the adaptive chirplet decomposition and empirical mode decomposition
In this paper, the adaptive chirplet decomposition combined with the Wigner-Ville transform and the empirical mode decomposition combined with the Hilbert transform are employed to process various non-stationary signals (strong ground motions and structural responses). The efficacy of these two adaptive techniques for capturing the temporal evolution of the frequency content of specific seismic signals is assessed. In this respect, two near-field and two far-field seismic accelerograms are analyzed. Further, a similar analysis is performed for records pertaining to the response of a 20-story steel frame benchmark building excited by one of the four accelerograms scaled by appropriate factors to simulate undamaged and severely damaged conditions for the structure. It is shown that the derived joint time–frequency representations of the response time histories capture quite effectively the influence of non-linearity on the variation of the effective natural frequencies of a structural system during the evolution of a seismic event; in this context, tracing the mean instantaneous frequency of records of critical structural responses is adopted.
The study suggests, overall, that the aforementioned techniques are quite viable tools for detecting and monitoring damage to constructed facilities exposed to seismic excitations.
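Tracing the mean instantaneous frequency of a response record reduces to differentiating the unwrapped phase of the analytic signal obtained via the Hilbert transform. The sketch below applies this to a synthetic chirp standing in for a structural response whose effective natural frequency drifts during shaking; the function name and parameters are invented for the example.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the analytic signal: the
    discrete derivative of the unwrapped Hilbert phase."""
    analytic = hilbert(x)                     # x + i * Hilbert(x)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

# Chirp drifting from 2 Hz toward 4 Hz over 10 s: phase = 2*pi*(2t + 0.1t^2),
# so the instantaneous frequency is f(t) = 2 + 0.2 t
fs = 200.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * (2.0 * t + 0.1 * t**2))
f_inst = instantaneous_frequency(x, fs)
```

For a yielding structure, the same curve would show the effective natural frequency dropping as stiffness degrades, which is the damage signature the time-frequency representations above are designed to expose (estimates near the record ends are unreliable due to edge effects).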