
    Blind source separation of underdetermined mixtures of event-related sources

    Get PDF
    This paper addresses the problem of blind source separation for underdetermined mixtures (i.e., more sources than sensors) of event-related sources, including quasi-periodic sources (e.g., the electrocardiogram (ECG)), sources with synchronized trials (e.g., event-related potentials (ERP)), and amplitude-variant sources. The proposed method is based on two steps: (i) tensor decomposition for underdetermined source separation and (ii) signal extraction by Kalman filtering to recover the source dynamics. A tensor is constructed for each source by synchronizing on the "event" period of the corresponding signal and stacking different periods along the second dimension of the tensor. To cope with the interference from other sources that impedes the extraction of weak signals, two robust tensor decomposition methods are proposed and compared. Then, the state parameters used within a nonlinear dynamic model for the extraction of event-related sources from noisy mixtures are estimated from the loading matrices provided by the first step. The influence of different parameters on the robustness to outliers of the proposed method is examined by numerical simulations. Applied to clinical electroencephalogram (EEG), ECG, and magnetocardiogram (MCG) data, the proposed method exhibits significantly better performance in terms of expected signal shape than classical source separation methods such as piCA and FastICA.
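    As a rough illustration of step (i), the sketch below builds an event-synchronized tensor from a multichannel recording. The array shapes, the synthetic pulse-like source, and the assumption of known event onsets are illustrative choices for this sketch, not the authors' implementation (which also covers robust decomposition and the Kalman-based extraction).

```python
# Rough illustration of step (i): build an event-synchronized tensor
# (channels x trials x samples-per-period) from a multichannel recording.
# The synthetic pulse-like source and the known event onsets are assumptions.
import numpy as np

def build_event_tensor(x, event_onsets, period):
    """x: (n_channels, n_samples) mixture; event_onsets: start sample of each event;
    period: samples kept per event. Returns (n_channels, n_trials, period)."""
    trials = [x[:, t0:t0 + period] for t0 in event_onsets if t0 + period <= x.shape[1]]
    # stack the repeated periods along the second dimension, as described in the abstract
    return np.stack(trials, axis=1)

# Toy example: a quasi-periodic source observed on two noisy channels
fs, n_sec = 250, 10
t = np.arange(fs * n_sec) / fs
source = np.sin(2 * np.pi * 1.0 * t) ** 5                  # crude pulse-like waveform
x = np.vstack([0.8 * source, 0.3 * source]) + 0.05 * np.random.randn(2, t.size)
onsets = np.arange(0, fs * (n_sec - 1) + 1, fs)             # one "event" per second (assumed known)
T = build_event_tensor(x, onsets, period=fs)
print(T.shape)                                              # (2, 10, 250)
```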

    Cramér-Rao Bounds for Complex-Valued Independent Component Extraction: Determined and Piecewise Determined Mixing Models

    Full text link
    This paper presents the Cramér-Rao Lower Bound (CRLB) for the complex-valued Blind Source Extraction (BSE) problem based on the assumption that the target signal is independent of the other signals. Two instantaneous mixing models are considered. First, we consider the standard determined mixing model used in Independent Component Analysis (ICA), where the mixing matrix is square and non-singular and the number of latent sources is the same as that of the observed signals. The CRLB for Independent Component Extraction (ICE), where the mixing matrix is re-parameterized in order to extract only one independent target source, is computed. The target source is assumed to be non-Gaussian or non-circular Gaussian, while the other signals (background) are circular Gaussian or non-Gaussian. The results confirm some previous observations known for the real domain and bring new results for the complex domain. Also, the CRLB for ICE is shown to coincide with that for ICA when the non-Gaussianity of the background is taken into account. Second, we extend the CRLB analysis to piecewise determined mixing models. Here, the observed signals are assumed to obey the determined mixing model within short blocks, where the mixing matrices can vary from block to block. However, either the mixing vector or the separating vector corresponding to the target source is assumed to be constant across the blocks. The CRLBs for the parameters of these models bring new performance bounds for the BSE problem. Comment: 25 pages, 8 figures.
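    For readers unfamiliar with the re-parameterization the abstract refers to, a minimal sketch in commonly used ICE notation is given below; the symbols and constraints are assumptions and the paper's exact conventions may differ.

```latex
% Minimal sketch of an ICE-style re-parameterization (assumed notation): the mixture
% is split into the target source s and a background y, and only the mixing vector a
% and separating vector w of the target are estimated, with a blocking matrix B that
% cancels the target (B a = 0) spanning the background subspace.
\[
  \mathbf{x} \;=\; \mathbf{a}\,s + \mathbf{y},
  \qquad
  \hat{s} \;=\; \mathbf{w}^{\mathsf H}\mathbf{x},
  \qquad
  \mathbf{w}^{\mathsf H}\mathbf{a} \;=\; 1,
  \qquad
  \mathbf{B}\,\mathbf{a} \;=\; \mathbf{0}.
\]
```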

    A Blind Source Separation Method for Convolved Mixtures by Non-Stationary Vibration Signals

    Full text link

    A review of second-order blind identification methods

    Get PDF
    Second-order source separation (SOS) is a data analysis tool which can be used for revealing hidden structures in multivariate time series data or as a tool for dimension reduction. Such methods are nowadays increasingly important as more and more high-dimensional multivariate time series data are measured in numerous fields of applied science. Dimension reduction is crucial, as modeling such high-dimensional data with multivariate time series models is often impractical: the number of parameters describing dependencies between the component time series is usually too high. SOS methods have their roots in the signal processing literature, where they were first used to separate source signals from an observed signal mixture. The SOS model assumes that the observed time series (signals) are linear mixtures of latent time series (sources) with uncorrelated components. The methods make use of second-order statistics, hence the name "second-order source separation." In this review, we discuss the classical SOS methods and their extensions to more complex settings. An example illustrates how SOS can be performed. This article is categorized under: Statistical Models > Time Series Models; Statistical and Graphical Methods of Data Analysis > Dimension Reduction; Data: Types and Structure > Time Series, Stochastic Processes, and Functional Data.
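    As a concrete illustration of the kind of classical SOS estimator such reviews cover, here is a minimal AMUSE-style sketch: whiten with the zero-lag covariance, then eigen-decompose one symmetrized lagged autocovariance. It is not code from the article, and the lag choice and toy data are assumptions.

```python
# Minimal AMUSE-style second-order source separation (illustrative only).
import numpy as np

def amuse(x, lag=1):
    """x: (n_channels, n_samples) observed mixture. Returns (unmixing matrix, sources)."""
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    whitener = e @ np.diag(1.0 / np.sqrt(d)) @ e.T          # whitening transform
    z = whitener @ x
    c_lag = z[:, :-lag] @ z[:, lag:].T / (z.shape[1] - lag)  # lagged autocovariance
    c_lag = 0.5 * (c_lag + c_lag.T)                          # symmetrize
    _, v = np.linalg.eigh(c_lag)
    w = v.T @ whitener                                       # unmixing matrix estimate
    return w, w @ x

# Toy example: two AR-like latent time series mixed by a random matrix
rng = np.random.default_rng(0)
s = np.vstack([np.convolve(rng.standard_normal(5000), [1, 0.9], 'same'),
               np.convolve(rng.standard_normal(5000), [1, -0.7], 'same')])
x = rng.standard_normal((2, 2)) @ s
w_hat, s_hat = amuse(x)
```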

    Dictionary Learning for Sparse Representations With Applications to Blind Source Separation.

    Get PDF
    During the past decade, sparse representation has attracted much attention in the signal processing community. It aims to represent a signal as a linear combination of a small number of elementary signals called atoms. These atoms constitute a dictionary, so that a signal can be expressed as the product of the dictionary and a sparse coefficient vector. This leads to two main challenges studied in the literature: sparse coding (finding the coding coefficients for a given dictionary) and dictionary design (finding an appropriate dictionary to fit the data). Dictionary design is the focus of this thesis. Traditionally, signals are decomposed over predefined mathematical transforms, such as the discrete cosine transform (DCT), which forms the so-called analytical approach. In recent years, learning-based methods have been introduced to adapt the dictionary to a set of training data, leading to the technique of dictionary learning. Although this may involve a higher computational complexity, learned dictionaries have the potential to offer improved performance compared with predefined dictionaries.

    Dictionary learning is often achieved by iteratively executing two operations: sparse approximation and dictionary update. We focus on the dictionary update step, where the dictionary is optimized with a given sparsity pattern. A novel framework is proposed to generalize benchmark mechanisms such as the method of optimal directions (MOD) and K-SVD, in which an arbitrary set of codewords and the corresponding sparse coefficients are simultaneously updated, hence the term simultaneous codeword optimization (SimCO). Moreover, its extended formulation, regularized SimCO, mitigates the major bottleneck of dictionary update caused by singular points. First- and second-order optimization procedures are designed to solve the primitive and regularized SimCO formulations. In addition, a tree-structured multi-level representation of the dictionary based on clustering is used to speed up the optimization process in the sparse coding stage.

    This novel dictionary learning algorithm is also applied to the underdetermined blind speech separation problem, leading to a multi-stage method in which the separation problem is reformulated as a sparse coding problem, with the dictionary learned by an adaptive algorithm. Using mutual coherence and the sparsity index, the performance of a variety of dictionaries for underdetermined speech separation is compared and analyzed, including dictionaries learned from speech mixtures and from ground-truth speech sources, as well as those predefined by mathematical transforms. Finally, we propose a new method for joint dictionary learning and source separation. Different from the multi-stage method, the proposed method can simultaneously estimate the mixing matrix, the dictionary and the sources in an alternating and blind manner. The advantages of all the proposed methods are demonstrated over the state-of-the-art methods using extensive numerical tests.
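    To make the generic two-step loop concrete, the sketch below alternates a crude hard-thresholding sparse coder with a MOD-style least-squares dictionary update. It is only an illustration of the framework the abstract describes; the problem sizes and the simple coder are assumptions, and SimCO/regularized SimCO themselves are not reproduced here.

```python
# Illustrative dictionary-learning loop: sparse approximation + dictionary update.
# Uses a plain MOD-style least-squares update, not SimCO or K-SVD.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, sparsity = 16, 32, 200, 3           # signal dim, atoms, training signals, nonzeros

Y = rng.standard_normal((n, k))              # training data (columns are signals)
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

for _ in range(20):
    # Sparse approximation: keep the 'sparsity' largest correlations per signal (crude coder)
    corr = D.T @ Y
    X = np.zeros_like(corr)
    idx = np.argsort(-np.abs(corr), axis=0)[:sparsity]
    np.put_along_axis(X, idx, np.take_along_axis(corr, idx, axis=0), axis=0)
    # Dictionary update (MOD): D = Y X^T (X X^T)^+, then re-normalize the atoms
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    D /= np.linalg.norm(D, axis=0) + 1e-12

print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))   # relative residual after training
```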

    Joint Tensor Factorization and Outlying Slab Suppression with Applications

    Full text link
    We consider factoring low-rank tensors in the presence of outlying slabs. This problem is important in practice, because data collected in many real-world applications, such as speech, fluorescence, and some social network data, fit this paradigm. Prior work tackles this problem by iteratively selecting a fixed number of slabs and fitting, a procedure which may not converge. We formulate this problem from a group-sparsity promoting point of view, and propose an alternating optimization framework to handle the corresponding ℓ_p (0 < p ≤ 1) minimization-based low-rank tensor factorization problem. The proposed algorithm features a per-iteration complexity similar to that of the plain trilinear alternating least squares (TALS) algorithm. Convergence of the proposed algorithm is also easy to analyze under the framework of alternating optimization and its variants. In addition, regularization and constraints can be easily incorporated to make use of a priori information on the latent loading factors. Simulations and real-data experiments on blind speech separation, fluorescence data analysis, and social network mining are used to showcase the effectiveness of the proposed algorithm.
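    For reference, here is a minimal sketch of the plain TALS baseline mentioned above. It deliberately omits the group-sparsity/ℓ_p slab reweighting that is the paper's contribution; the unfolding conventions and toy data are assumptions of this sketch.

```python
# Plain trilinear ALS (TALS) for a 3-way CP factorization (vanilla baseline only).
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product of a (I x R) and b (J x R) -> (IJ x R)."""
    return np.einsum('ir,jr->ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)

def tals(T, rank, n_iter=100, seed=0):
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    X1 = T.reshape(I, J * K)                      # mode-1 unfolding, matches khatri_rao(B, C)
    X2 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # mode-2 unfolding, matches khatri_rao(A, C)
    X3 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # mode-3 unfolding, matches khatri_rao(A, B)
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)   # least-squares update of each factor
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Toy check: factor a synthetic rank-3 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((5, 3)), rng.standard_normal((6, 3)), rng.standard_normal((7, 3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = tals(T, rank=3)
```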

    Tensors: a Brief Introduction

    No full text
    Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor decomposition plays a central role in the identification of underdetermined mixtures. Despite some similarities, CP and the Singular Value Decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief survey.
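    For readers new to the topic, the CP model mentioned above can be written in standard notation as a sum of rank-one terms (the notation below is an assumption, not quoted from the survey):

```latex
% CP (canonical polyadic) decomposition of a third-order tensor into R rank-one terms;
% unlike the matrix SVD, this decomposition is essentially unique under mild conditions.
\[
  \mathcal{T} \;\approx\; \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r ,
  \qquad
  t_{ijk} \;\approx\; \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr} .
\]
```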

    Blind identification of mixtures of quasi-stationary sources.

    Get PDF
    Blind identification of linear instantaneous mixtures of quasi-stationary sources (BI-QSS) has received great research interest over the past few decades, motivated by its application in blind speech separation. In this problem, we identify the unknown mixing system coefficients by exploiting the time-varying characteristics of quasi-stationary sources. Traditional BI-QSS methods fall into two main categories: i) Parallel Factor Analysis (PARAFAC), which is based on tensor decomposition; ii) Joint Diagonalization (JD), which is based on approximate joint diagonalization of multiple matrices. In both PARAFAC and JD, the joint-source formulation is used in general; i.e., the algorithms are designed to identify the whole mixing system simultaneously. In this thesis, I devise a novel blind identification framework using a Khatri-Rao (KR) subspace formulation. The proposed formulation is different from the traditional formulations in that it decomposes the blind identification problem into a number of per-source, structurally less complex subproblems. For the overdetermined mixing models, a specialized alternating projections algorithm is proposed for the KR subspace formulation. The resulting algorithm is not only empirically found to be very competitive, but also has a theoretically neat convergence guarantee. Even better, the proposed algorithm can be applied to the underdetermined mixing models in a straightforward manner. Rank minimization heuristics are proposed to speed up the algorithm for the underdetermined mixing model. The advantages of employing the rank minimization heuristics are demonstrated by simulations.
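    For context, the per-frame local covariance model and the Khatri-Rao structure that such KR subspace formulations exploit are commonly written as follows (standard notation assumed here; the thesis' own conventions may differ slightly):

```latex
% Local covariance / Khatri-Rao structure for quasi-stationary sources (assumed,
% standard notation). Within a short frame m the sources are roughly stationary,
% so the local covariance of the mixture x(t) = A s(t) is approximately
% A Diag(d_m) A^H (after noise covariance removal), and vectorizing it exposes a
% column-wise Kronecker (Khatri-Rao) structure that enables per-source subproblems.
\[
  \mathbf{R}_m \;\approx\; \mathbf{A}\,\operatorname{Diag}(\mathbf{d}_m)\,\mathbf{A}^{\mathsf H},
  \qquad
  \operatorname{vec}(\mathbf{R}_m) \;\approx\; \bigl(\mathbf{A}^{*} \odot \mathbf{A}\bigr)\,\mathbf{d}_m .
\]
```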
    Lee, Ka Kit. Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 72-76). Abstracts also in Chinese.

    Contents:
    Abstract
    Acknowledgement
    Chapter 1 - Introduction
    Chapter 2 - Settings of Quasi-Stationary Signals based Blind Identification
        2.1 Signal Model
        2.2 Assumptions
        2.3 Local Covariance Model
        2.4 Noise Covariance Removal
        2.5 Prewhitening
        2.6 Summary
    Chapter 3 - Review on Some Existing BI-QSS Algorithms
        3.1 Joint Diagonalization
            3.1.1 Fast Frobenius Diagonalization [4]
            3.1.2 Pham's JD [5, 6]
        3.2 Parallel Factor Analysis
            3.2.1 Tensor Decomposition [37]
            3.2.2 Alternating-Columns Diagonal-Centers [12]
            3.2.3 Trilinear Alternating Least-Squares [10, 11]
        3.3 Summary
    Chapter 4 - Proposed Algorithms
        4.1 KR Subspace Criterion
        4.2 Blind Identification using Alternating Projections
            4.2.1 All-Columns Identification
        4.3 Overdetermined Mixing Models (N > K): Prewhitened Alternating Projection Algorithm (PAPA)
        4.4 Underdetermined Mixing Models (N < K)
            4.4.1 Rank Minimization Heuristic
            4.4.2 Alternating Projections Algorithm with Huber Function Regularization
        4.5 Robust KR Subspace Extraction
        4.6 Summary
    Chapter 5 - Simulation Results
        5.1 General Settings
        5.2 Overdetermined Mixing Models
            5.2.1 Simulation 1 - Performance w.r.t. SNR
            5.2.2 Simulation 2 - Performance w.r.t. the Number of Available Frames M
            5.2.3 Simulation 3 - Performance w.r.t. the Number of Sources K
        5.3 Underdetermined Mixing Models
            5.3.1 Simulation 1 - Success Rate of KR Huber
            5.3.2 Simulation 2 - Performance w.r.t. SNR
            5.3.3 Simulation 3 - Performance w.r.t. M
            5.3.4 Simulation 4 - Performance w.r.t. N
        5.4 Summary
    Chapter 6 - Conclusion and Future Works
    Appendix A - Convolutive Mixing Model
    Appendix B - Proofs (Theorem 4.1, Theorem 4.2, Observation 4.1, Proposition 4.1)
    Appendix C - Singular Value Thresholding
    Appendix D - Categories of Speech Sounds and Their Impact on SOSs-based BI-QSS Algorithms (Vowels, Consonants, Silent Pauses)
    Bibliography