89 research outputs found

    Data-driven Signal Decomposition Approaches: A Comparative Analysis

    Signal decomposition (SD) approaches aim to decompose non-stationary signals into their constituent amplitude- and frequency-modulated components. This is an important preprocessing step in many practical signal processing pipelines, providing useful knowledge and insight into the data and the relevant underlying system(s) while also facilitating tasks such as noise or artefact removal and feature extraction. The popular SD methods are mostly data-driven, striving to obtain inherent well-behaved signal components without making many prior assumptions about the input data. These methods include empirical mode decomposition (EMD) and its variants, variational mode decomposition (VMD) and its variants, the synchrosqueezed transform (SST) and its variants, and sliding singular spectrum analysis (SSA). With the increasing popularity and utility of these methods in a wide range of applications, it is imperative to gain a better understanding of how these algorithms operate, to evaluate their accuracy with and without noise in the input data, and to gauge their sensitivity to changes in algorithmic parameters. In this work, we achieve those tasks through extensive experiments involving carefully designed synthetic and real-life signals. Based on our experimental observations, we comment on the pros and cons of the considered SD algorithms and highlight best practices, in terms of parameter selection, for their successful operation. SD algorithms for both single- and multi-channel (multivariate) data fall within the scope of our work. For multivariate signals, we evaluate the performance of the popular algorithms in terms of fulfilling the mode-alignment property, especially in the presence of noise.
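The evaluation methodology described above rests on carefully designed synthetic AM-FM test signals with controlled noise levels. A minimal sketch of how such a benchmark signal might be constructed (the component parameters and helper names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def am_fm_component(t, a0, a1, f0, f1):
    # One amplitude- and frequency-modulated component: a slowly
    # varying envelope times a linear chirp sweeping f0 -> f1 Hz.
    amp = a0 + a1 * np.sin(2 * np.pi * 0.5 * t)
    phase = f0 * t + 0.5 * (f1 - f0) * t**2 / t[-1]
    return amp * np.cos(2 * np.pi * phase)

def add_noise(x, snr_db, rng):
    # Add white Gaussian noise at a target signal-to-noise ratio (dB).
    p_noise = np.mean(x**2) / 10**(snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(p_noise), x.shape)

t = np.linspace(0, 1, 1000, endpoint=False)
clean = (am_fm_component(t, 1.0, 0.3, 5, 15)
         + am_fm_component(t, 0.8, 0.2, 40, 60))
noisy = add_noise(clean, snr_db=10, rng=np.random.default_rng(0))
```

A candidate SD algorithm would then be scored by how closely its extracted modes match the two known components, with the SNR swept to probe noise sensitivity.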

    Direct Signal Separation Via Extraction of Local Frequencies with Adaptive Time-Varying Parameters

    In nature, real-world phenomena that can be formulated as signals (or in terms of time series) are often affected by a number of factors and appear as multi-component modes. The natural approach to understanding and processing such phenomena is to decompose, or better still, to separate the multi-component signals into their basic building blocks (called sub-signals, time-series components, or fundamental modes). Recently, the synchro-squeezing transform (SST) and its variants have been developed for non-stationary signal separation. More recently, a direct time-frequency method, called the signal separation operation (SSO), was introduced for multi-component signal separation. While both SST and SSO are mathematically rigorous in their instantaneous frequency (IF) estimation, SSO avoids the second step of the two-step SST method of signal separation, which depends heavily on the accuracy of the estimated IFs. In the present paper, we solve the signal separation problem by constructing an adaptive signal separation operator (ASSO) for more effective separation of blind-source multi-component signals, via a time-varying parameter that adapts to local IFs. A recovery scheme is also proposed to extract the signal components one by one, with the time-varying parameter updated for each component. The proposed method is suitable for engineering implementation, being capable of separating complicated signals into their sub-signals and reconstructing the signal trend directly. Numerical experiments on synthetic and real-world signals are presented to demonstrate our improvement over previous attempts.
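The IF estimates on which the second SST step depends can be illustrated with a simple STFT ridge extractor; this is a generic baseline, not the SSO/ASSO method itself, and the signal parameters are assumed for illustration:

```python
import numpy as np
from scipy.signal import stft

fs = 1000
t = np.arange(0, 2, 1 / fs)
# Linear chirp with known instantaneous frequency 50 + 20 t (Hz)
x = np.cos(2 * np.pi * (50 * t + 10 * t**2))

f, frames, Z = stft(x, fs=fs, nperseg=256)
ridge_if = f[np.argmax(np.abs(Z), axis=0)]  # dominant frequency per frame
true_if = 50 + 20 * frames                  # ground truth at frame centres
```

A method like SSO then extracts each component directly at the estimated ridge, rather than first synchro-squeezing the whole TF plane and then reconstructing.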

    Single channel blind source separation

    Single channel blind source separation (SCBSS) is an intensively researched field with numerous important applications. This research sets out to investigate the separation of monaural mixed audio recordings without relying on training knowledge. It proposes a novel method based on variable regularised sparse nonnegative matrix factorization, which decomposes an information-bearing matrix into a two-dimensional convolution of factor matrices representing the spectral basis and temporal code of the sources. In this work, a variational Bayesian approach has been developed for computing the sparsity parameters of the matrix factorization. To further improve on this, the research proposes a new method based on decomposing the mixture into a series of oscillatory components termed intrinsic mode functions (IMFs). It is shown that IMFs have several desirable properties unique to the SCBSS problem and that these properties can be exploited to relax the constraints posed by the problem. In addition, this research develops a novel method for feature extraction using a psycho-acoustic model. The monaural mixed signal is transformed into a cochleagram using the gammatone filterbank, whose bandwidths increase with the center frequency, resulting in non-uniform time-frequency (TF) resolution in the analysis of the audio signal. Within this domain, a family of novel two-dimensional matrix factorizations based on the Itakura-Saito (IS) divergence has been developed. The proposed matrix factorizations have the property of scale invariance, which enables lower-energy components in the cochleagram to be treated with the same importance as high-energy ones. Results show that all the algorithms developed in this thesis outperform conventional methods.
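The core of any such factorization is the multiplicative-update NMF scheme. The sketch below uses the standard Lee-Seung updates for the Euclidean cost as a simplified stand-in for the thesis's two-dimensional convolutive, sparsity-regularised and IS-divergence variants (all names and sizes are illustrative):

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    # Basic nonnegative matrix factorization V ~ W @ H using
    # Lee-Seung multiplicative updates (Euclidean cost). W holds the
    # spectral basis, H the temporal code, in the SCBSS picture.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: factor a random nonnegative rank-2 matrix.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The convolutive variant replaces the single product W @ H with a sum of time- and frequency-shifted factor products, and the IS-divergence version changes only the ratios inside the two update rules.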

    Statistical Properties and Applications of Empirical Mode Decomposition

    Signal analysis is key to extracting information buried in noise. Signal decomposition is a data analysis tool for determining the underlying physical components of a data set. However, conventional signal decomposition approaches such as wavelet analysis, the Wigner-Ville distribution, and various short-time Fourier spectrograms are inadequate for processing real-world signals. Moreover, most of these techniques require a priori knowledge of the processed signal in order to select the proper decomposition basis, which makes them unsuitable for a wide range of practical applications. Empirical Mode Decomposition (EMD) is a non-parametric, adaptive, basis-driven method capable of breaking down non-linear, non-stationary signals into a finite set of intrinsic components called Intrinsic Mode Functions (IMFs). In addition, EMD approximates a dyadic filter that isolates high-frequency components, e.g. noise, in the lower-index IMFs. Despite being widely used in different applications, EMD is an ad hoc solution: its adaptive performance comes at the expense of a theoretical foundation, so numerical analysis is usually adopted in the literature to interpret its behavior. This dissertation investigates the statistical properties of EMD and utilizes the outcome to enhance the performance of signal de-noising and spectrum sensing systems. The novel contributions can be broadly summarized in three categories: a statistical analysis of the probability distributions of the IMFs, suggesting the Generalized Gaussian distribution (GGD) as the best-fit distribution; a de-noising scheme based on a null hypothesis of IMFs that utilizes the unique filter behavior of EMD; and a novel noise estimation approach, based on the first IMF, that turns semi-blind spectrum sensing techniques into fully blind ones. These contributions are justified statistically and analytically and include comparisons with other state-of-the-art techniques.
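The GGD-fitting contribution can be sketched with SciPy's generalized-normal distribution, whose shape parameter beta recovers the Gaussian (beta = 2) and Laplacian (beta = 1) special cases. The data here are synthetic, standing in for actual IMF samples:

```python
import numpy as np
from scipy.stats import gennorm

# Fit a Generalized Gaussian distribution to samples; for the IMFs of
# a noisy signal the fitted shape parameter characterises the tails.
rng = np.random.default_rng(2)
samples = rng.normal(size=5000)          # stand-in for one IMF's samples
beta, loc, scale = gennorm.fit(samples)  # beta ~ 2 for Gaussian data
```

A de-noising scheme of the kind described would compare the fitted parameters of each IMF against those expected under a noise-only null hypothesis.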

    A Novel Diffusion-based Empirical Mode Decomposition Algorithm for Signal and Image Analysis

    In the area of signal analysis and processing, the Fourier transform and wavelet transform are widely applied. Empirical Mode Decomposition (EMD) was proposed as an alternative frequency analysis tool. Although shown to be effective when analyzing non-stationary signals, the algorithmic nature of EMD makes theoretical analysis and formulation difficult. Furthermore, it has some limitations that affect its performance. In this thesis, we introduce methods to extend or modify EMD, in an effort to provide a rigorous mathematical basis for it and to overcome its shortcomings. We propose a novel diffusion-based EMD algorithm that replaces the interpolation process with a diffusion equation and directly constructs the mean curve (surface) of a signal (image). We show that the new method simplifies the mathematical analysis and provides a solid theory that interprets the EMD mechanism. In addition, we apply the new method to 1D and 2D signal analysis, showing its possible applications in audio and image signal processing. Finally, numerical experiments on synthetic and real signals (both 1D and 2D) are presented. Simulation results demonstrate that our new algorithm can overcome some of the shortcomings of EMD and requires much less computation time.
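The key idea, replacing envelope interpolation with a diffusion equation to obtain the local mean, can be imitated with Gaussian smoothing, since the Gaussian kernel is exactly the solution kernel of the heat (diffusion) equation. The sketch below is a crude one-scale analogue, not the thesis's algorithm; the smoothing width and test signal are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def diffusion_mean_step(x, sigma):
    # Evolve the signal under diffusion (Gaussian smoothing) to obtain
    # a mean curve; the fast oscillation left over plays the role of
    # an IMF, with no extrema detection or spline interpolation needed.
    mean_curve = gaussian_filter1d(x, sigma, mode='nearest')
    return x - mean_curve, mean_curve

t = np.linspace(0, 1, 2000)
fast = np.sin(2 * np.pi * 50 * t)
slow = 2 * np.sin(2 * np.pi * 2 * t)
detail, trend = diffusion_mean_step(fast + slow, sigma=15)
```

Iterating the step on the trend, with increasing sigma, yields a full multiscale decomposition; the 2D case replaces `gaussian_filter1d` with its image counterpart.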

    A Quantitative Measure of Mono-Componentness for Time-Frequency Analysis

    Joint time-frequency (TF) analysis is an ideal method for analyzing non-stationary signals, but it is challenging to use, which leads to it often being neglected. The exceptions are the short-time Fourier transform (STFT) and the spectrogram. Even then, the inability to have simultaneously high time and frequency resolution is a frustrating issue with the STFT and spectrogram. However, there is a family of joint TF analysis techniques that does have simultaneously high time and frequency resolution: the quadratic TF distribution (QTFD) family. Unfortunately, QTFDs are often more troublesome than beneficial. The issue is the interference/cross-terms that make these methods so difficult to use. They require that the “proper” joint distribution be selected based on information that is typically unavailable for real-world signals. However, QTFDs do not produce cross-terms when applied to a mono-component signal. Clearly, determining the mono-componentness of a signal provides a key piece of information. Until now, however, the means of determining whether a signal is a mono-component or a multi-component has been to choose a QTFD, generate the TF representation (TFR), and visually examine it. The work presented here provides a method for quantitatively determining whether a signal is a mono-component. This new capability is an important step towards finally allowing QTFDs to be used on multi-component signals while producing few to no interference terms, by enabling the use of the quadratic superposition property. The focus of this work is on establishing the legitimacy of “measuring” mono-componentness, along with its algorithmic implementation. Several applications are presented, such as quantifying the quality of the decomposition results produced by the blind decomposition algorithm Empirical Mode Decomposition (EMD).
The mono-componentness measure not only provides an objective means to validate the outcome of a decomposition algorithm; it also provides a practical, quantitative metric for comparing such algorithms. More importantly, this quantitative measurement encapsulates mono-componentness in a form that can be incorporated into the design of decomposition algorithms as a viable condition/constraint, so that true mono-components can be extracted. Incorporating the mono-componentness measure into a decomposition algorithm will eventually allow interference-free TFRs to be calculated from multi-component signals without requiring prior knowledge.
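The dissertation's actual measure is not reproduced here, but the idea of quantifying mono-componentness can be illustrated with a crude proxy: count, per STFT frame, how many significant spectral peaks appear (the threshold, window length and test signals are all assumptions):

```python
import numpy as np
from scipy.signal import stft, find_peaks

def multicomponent_fraction(x, fs, nperseg=256, rel_height=0.1):
    # Fraction of STFT frames containing more than one significant
    # spectral peak: near 0 for a mono-component, near 1 for a
    # multi-component with concurrent components.
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    cols = np.abs(Z).T
    multi = 0
    for col in cols:
        peaks, _ = find_peaks(col, height=rel_height * col.max())
        multi += len(peaks) > 1
    return multi / len(cols)

fs = 1000
t = np.arange(0, 1, 1 / fs)
mono = np.cos(2 * np.pi * 100 * t)
duo = mono + np.cos(2 * np.pi * 300 * t)
```

Unlike a visual TFR inspection, a scalar like this can sit inside a decomposition loop as a stopping or validation criterion, which is the role the abstract envisions for the real measure.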

    Enhanced sparse component analysis for operational modal identification of real-life bridge structures

    Blind source separation is receiving increasing attention as an alternative tool for operational modal analysis in civil applications. However, implementations on real-life structures are rare in the literature, especially in the case of limited sensors. In this study, an enhanced version of sparse component analysis is proposed for output-only modal identification with less user involvement than existing work. The method is validated on ambient and non-stationary vibration signals collected from two bridge structures, with its working performance evaluated against the classic operational modal analysis methods: stochastic subspace identification and the natural excitation technique combined with the eigensystem realisation algorithm (NExT/ERA). Analysis results indicate that the method provides modal parameter results comparable to those of NExT/ERA for ambient vibration data. The method is also effective in analysing non-stationary signals due to heavy truck loads or human excitations and in capturing small changes in the mode shapes and modal frequencies of bridges. Additionally, closely spaced and low-energy modes can be easily identified. The proposed method shows potential for automatic modal identification on field test data. The third author gratefully acknowledges funding from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 330195.
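The principle behind sparse component analysis — at time-frequency points where only one mode is active, the inter-sensor ratio reveals a column of the mixing (mode-shape) matrix — can be sketched for a two-sensor toy case. The mixing matrix and mode frequencies below are invented for illustration:

```python
import numpy as np

fs = 512
t = np.arange(0, 4, 1 / fs)
# Two "modes", spectrally disjoint, hence sparse in the frequency domain.
s = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sin(2 * np.pi * 50 * t)])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # stand-in for the mode-shape matrix
x = A @ s                       # two-sensor measurements

X = np.fft.rfft(x, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# At a single-source frequency bin the inter-sensor ratio recovers
# the direction of the corresponding column of A.
b1 = np.argmin(np.abs(freqs - 5))
b2 = np.argmin(np.abs(freqs - 50))
dir1 = (X[1, b1] / X[0, b1]).real   # ~ A[1,0] / A[0,0] = 0.4
dir2 = (X[1, b2] / X[0, b2]).real   # ~ A[1,1] / A[0,1] ~ 1.67
```

Real operational data requires detecting and clustering many such single-source TF points rather than reading off two known bins; automating that step with less user involvement is what the enhanced method in the abstract addresses.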

    Application of Singular Spectrum Analysis (SSA), Independent Component Analysis (ICA) and Empirical Mode Decomposition (EMD) for automated solvent suppression and automated baseline and phase correction from multi-dimensional NMR spectra

    A common problem in protein structure determination by NMR spectroscopy is the solvent artifact. Typically, a deuterated solvent is used instead of normal water. However, several experimental methods have been developed to suppress the solvent signal in cases where one has to use a protonated solvent, or where the signals of the remaining protons, even in a highly deuterated sample, are still too strong. For a protein dissolved in 90% H2O / 10% D2O, the concentration of solvent protons is about five orders of magnitude greater than the concentration of the protons of interest in the solute. Therefore, the evaluation of multi-dimensional NMR spectra may be incomplete, since certain resonances of interest (e.g. Hα proton resonances) are hidden by the solvent signal, and since parts of the solvent signal may be misinterpreted as cross peaks originating from the protein. The experimental solvent suppression procedures are typically not able to recover these significant protein signals. Many post-processing methods have been designed to overcome this problem. In this work, several algorithms for the suppression of the water signal have been developed and compared. In particular, it has been shown that Singular Spectrum Analysis (SSA) can be applied advantageously to remove the solvent artifact from NMR spectra of any dimensionality, whether digitally or analogically acquired. The investigated time-domain signals (FIDs) are decomposed into water- and protein-related components by means of an initial embedding of the data in the space of time-delayed coordinates. Eigenvalue decomposition is applied to these data, and the component with the highest variance (typically represented by the dominant solvent signal) is discarded before reverting the embedding.
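The SSA step just described (time-delay embedding, eigen-decomposition, removal of the highest-variance component, reversal of the embedding) can be sketched as follows, with a large constant baseline standing in for the dominant solvent signal; the window length and test data are assumptions, not values from the thesis:

```python
import numpy as np

def ssa_remove_dominant(x, L, k=1):
    # Embed x in time-delayed coordinates (trajectory matrix), discard
    # the k highest-variance SVD components (the "solvent"), and revert
    # the embedding by diagonal averaging.
    N = x.size
    K = N - L + 1
    H = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[:k] = 0.0
    Hr = (U * s) @ Vt
    out = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        out[j:j + L] += Hr[:, j]
        cnt[j:j + L] += 1.0
    return out / cnt

n = np.arange(500)
solute = 0.5 * np.sin(2 * np.pi * 0.05 * n)  # weak resonances of interest
fid = 10.0 + solute                          # dominant "solvent" baseline
suppressed = ssa_remove_dominant(fid, L=100)
```

Real FIDs are complex-valued and the solvent occupies more than one component, so k and L must be chosen with care — which is exactly the embedding-dimension analysis the abstract goes on to describe.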
Pre-processing (group delay management and signal normalization) and post-processing (inverse normalization, Fourier transformation, and phase and baseline corrections) of the NMR data are mandatory in order to obtain a better suppression performance. The optimal embedding dimension has been determined empirically through a specific qualitative and quantitative analysis of the extracted components, applied to a back-calculated two-dimensional spectrum of the HPr protein from Staphylococcus aureus. Moreover, the investigation of experimental data (a three-dimensional 1H13C HCCH-TOCSY spectrum of the Trx protein from Plasmodium falciparum and two-dimensional NOESY and TOCSY spectra of the HPr protein from Staphylococcus aureus) has revealed the ability of the algorithm to recover resonances hidden underneath the water signal. Pathological diseases and the effects of drugs and lifestyle can be detected by NMR spectroscopy applied to samples containing biofluids (e.g. urine, blood, saliva). The detection of signals of interest in such spectra can be hampered by the solvent as well. The SSA has also been successfully applied to one-dimensional urine, blood and cell spectra. The algorithm for automated solvent suppression has been introduced into the AUREMOL software package (AUREMOL_SSA). It is optionally followed by an automated baseline correction in the frequency domain (AUREMOL_ALS), which can also be used independently of the former algorithm. The automated recognition of baseline points is performed differently depending on the dimensionality of the data. In order to investigate the limitations of the SSA, it has been applied to spectra whose dominant signal is not the solvent (as in the case of watergate solvent suppression, and in the case of back-calculated data not including any experimental water signal), determining the optimal solvent-to-solute ratio.
The Independent Component Analysis (ICA) represents a valid alternative for water suppression when the solvent signal is not the dominant one in the spectra (when it is smaller than half of the strongest solute resonance). In particular, two components are obtained: the solvent and the solute. The ICA needs as input at least as many different spectra (mixtures) as there are components (source signals), so the definition of a suitable protocol for generating a dataset of one-dimensional ICA-tailored inputs is straightforward. The ICA has been shown to overcome the limitations of the SSA and to recover resonances of interest that cannot be detected with the SSA. The ICA avoids all the pre- and post-processing steps, since it is applied directly in the frequency domain. On the other hand, whereas in the SSA case the component to be removed (the one with the highest variance) is detected automatically, in the ICA a visual inspection of the extracted components is still required, considering that the output is permutable and that scale and sign ambiguities may occur. The Empirical Mode Decomposition (EMD) has proven to be more suitable for automated phase correction than for solvent suppression purposes. It decomposes the FID into several intrinsic mode functions (IMFs) whose frequency of oscillation decreases from the first to the last one (which identifies the solvent signal). The automatically identified non-baseline regions in the Fourier transform of the sum of the first IMFs are evaluated separately, and genetic algorithms are applied in order to determine the zero- and first-order terms suitable for an optimal phase correction. The SSA and ALS algorithms have been applied before assigning the two-dimensional NOESY spectrum (with the program KNOWNOE) of the PSCD4-domain of the pleuralin protein, in order to increase the number of already existing distance restraints.
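To make the ICA step concrete, here is a minimal two-mixture separator: whiten the data, then search for the rotation that extremises the kurtosis of the outputs. It is a toy stand-in for FastICA-style algorithms, applied to synthetic signals rather than spectra; as the text notes, the components come back with arbitrary order, sign and scale:

```python
import numpy as np

def ica_two_mixtures(X, n_angles=720):
    # Whiten, then grid-search the rotation angle maximising the total
    # |excess kurtosis| of the outputs (a classic ICA contrast).
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    Z = (E / np.sqrt(d)).T @ Xc          # whitened mixtures
    best, best_score = Z, -np.inf
    for a in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        R = np.array([[np.cos(a), np.sin(a)],
                      [-np.sin(a), np.cos(a)]])
        Y = R @ Z
        score = np.sum(np.abs(np.mean(Y**4, axis=1) - 3.0))
        if score > best_score:
            best, best_score = Y, score
    return best

t = np.linspace(0, 10, 4000)
s1 = np.sin(2 * np.pi * 3 * t)             # "solute"-like source
s2 = np.sign(np.sin(2 * np.pi * 0.5 * t))  # "solvent"-like source
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ np.vstack([s1, s2])
Y = ica_two_mixtures(X)
```

The permutation and sign ambiguities are visible here: the recovered rows match the sources only up to order and sign, which is why the text says a visual inspection of the extracted components is still required.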
A new routine to derive 3JHNHα couplings from torsion angles (via the Karplus relation), and vice versa, has been introduced into the AUREMOL software. Using the newly developed tools, a refined three-dimensional structure of the PSCD4-domain could be obtained.