79 research outputs found

    Simultaneous Source Localization and Polarization Estimation via Non-Orthogonal Joint Diagonalization with Vector-Sensors

    Joint estimation of direction-of-arrival (DOA) and polarization with electromagnetic vector-sensors (EMVS) is considered in the framework of complex-valued non-orthogonal joint diagonalization (CNJD). Two new CNJD algorithms are presented, which tackle the high-dimensional optimization problem in CNJD via a sequence of simple sub-optimization problems, using LU or LQ decompositions of the target matrices together with a Jacobi-type scheme. Furthermore, based on these CNJD algorithms we present a novel strategy that exploits the multi-dimensional structure present in the second-order statistics of EMVS outputs for simultaneous DOA and polarization estimation. Simulations are provided to compare the proposed strategy with existing tensorial and joint-diagonalization-based methods.
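    The special case of two exactly jointly diagonalizable target matrices admits a closed-form non-orthogonal diagonalizer via a generalized eigendecomposition. A minimal sketch of that special case (illustrative only; the paper's CNJD algorithms handle several noisy target matrices via LU/LQ-parameterized Jacobi-type sweeps, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # unknown mixing
d1, d2 = rng.uniform(1, 2, n), rng.uniform(1, 2, n)
C1 = A @ np.diag(d1) @ A.conj().T   # two target matrices sharing the
C2 = A @ np.diag(d2) @ A.conj().T   # same (non-orthogonal) diagonalizer

# C2 @ inv(C1) = A @ diag(d2/d1) @ inv(A), so its eigenvectors recover the
# columns of A up to scale and permutation; inv(U) then diagonalizes both.
_, U = np.linalg.eig(C2 @ np.linalg.inv(C1))
V = np.linalg.inv(U)

def off_ratio(M):
    """Fraction of the Frobenius norm lying off the main diagonal."""
    return np.linalg.norm(M - np.diag(np.diag(M))) / np.linalg.norm(M)

ratios = [off_ratio(V @ C @ V.conj().T) for C in (C1, C2)]
```

    With more than two matrices, or with estimation noise, no exact diagonalizer exists in general, which is what motivates the iterative sub-optimization schemes of the paper.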

    Blind Separation of Independent Sources from Convolutive Mixtures

    The problem of blindly separating independent sources from a convolutive mixture cannot be addressed in its widest generality without resorting to statistics of order higher than two. The core of the problem is in fact to identify the para-unitary part of the mixture, which is addressed in this paper. With this goal, a family of statistical contrasts is first defined. It is then shown that the problem reduces to a Partial Approximate Joint Diagonalization (PAJOD) of several cumulant matrices. Next, a numerical algorithm is devised, which works block-wise and sweeps all the output pairs. Computer simulations show the good behavior of the algorithm in terms of symbol error rate, even on very short data blocks.
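    The pair-sweeping strategy can be illustrated on the simpler problem of orthogonal approximate joint diagonalization of real symmetric matrices. A minimal sketch, assuming an exactly diagonalizable test set (the per-pair angle search below is a generic stand-in for the closed-form Givens rotations used in practice, and PAJOD additionally diagonalizes only a partial block of the matrices):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def jacobi_ajd(mats, sweeps=10):
    """Approximate joint diagonalization by Jacobi sweeps: for every
    coordinate pair (p, q), apply the Givens rotation whose angle
    minimizes the total off-diagonal energy of all matrices."""
    mats = [M.copy() for M in mats]
    n = mats[0].shape[0]
    V = np.eye(n)

    def off_energy(Ms):
        return sum(np.sum((M - np.diag(np.diag(M))) ** 2) for M in Ms)

    def givens(theta, p, q):
        R = np.eye(n)
        R[p, p] = R[q, q] = np.cos(theta)
        R[p, q], R[q, p] = np.sin(theta), -np.sin(theta)
        return R

    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                cost = lambda t: off_energy(
                    [givens(t, p, q) @ M @ givens(t, p, q).T for M in mats])
                theta = minimize_scalar(cost, bounds=(-np.pi / 4, np.pi / 4),
                                        method="bounded").x
                R = givens(theta, p, q)
                mats = [R @ M @ R.T for M in mats]
                V = R @ V
    return V, mats

# Exactly jointly diagonalizable test set: Q diag(d_k) Q^T, Q orthogonal.
rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
mats = [Q @ np.diag(rng.uniform(1, 3, n)) @ Q.T for _ in range(5)]
V, out = jacobi_ajd(mats)
off = sum(np.sum((M - np.diag(np.diag(M))) ** 2) for M in out)
total = sum(np.sum(M ** 2) for M in out)
```

    Each sweep only ever decreases the off-diagonal energy, which is the basic reason block-wise pair sweeps are well behaved.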

    Independent component analysis for non-standard data structures

    Independent component analysis is a classical multivariate tool used for estimating independent sources among collections of mixed signals. However, modern forms of data are typically too complex for the basic theory to handle adequately. In this thesis, extensions of independent component analysis to three cases of non-standard data structures are developed: noisy multivariate data, tensor-valued data and multivariate functional data. In each case we define the corresponding independent component model along with the related assumptions and implications. The proposed estimators are mostly based on the use of kurtosis and its analogues for the considered structures, resulting in functionals of rather unified form regardless of the type of the data. We prove the Fisher consistency of the estimators, and particular weight is given to their limiting distributions, with which comparisons between the methods are also made.
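    As a rough illustration of the kurtosis-based approach in the standard multivariate case, the following sketch whitens mixed data and then runs a fixed-point search for one component in the style of FastICA with a cubic nonlinearity (the sources, mixing matrix and sample size are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
S = rng.uniform(-1, 1, size=(2, T))        # independent sub-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # made-up mixing matrix
X = A @ S

# Whiten: center, then rotate/scale to identity covariance.
X = X - X.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(np.cov(X))
Z = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T @ X

# Fixed-point iteration maximizing |kurtosis| of w^T z:
#   w <- E[z (w^T z)^3] - 3 w,  followed by renormalization.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = w @ Z
    w = (Z * y ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

est = w @ Z  # one estimated source, up to sign and scale
corr = [abs(np.corrcoef(est, s)[0, 1]) for s in S]
```

    The thesis generalizes exactly this kind of kurtosis functional to the noisy, tensor-valued and functional settings.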

    Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise

    We introduce coroICA, confounding-robust independent component analysis, a novel ICA algorithm which decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise stationary confounding. It extends the ordinary ICA model in a theoretically sound and explicit way to incorporate group-wise (or environment-wise) confounding. We show that our proposed general noise model makes it possible to perform ICA in settings where other noisy ICA procedures fail. Additionally, it can be used for applications with grouped data by adjusting for different stationary noise within each group. Our proposed noise model has a natural relation to causality and we explain how it can be applied in the context of causal inference. In addition to our theoretical framework, we provide an efficient estimation procedure and prove identifiability of the unmixing matrix under mild assumptions. Finally, we illustrate the performance and robustness of our method on simulated data, provide audible and visual examples, and demonstrate the applicability to real-world scenarios by experiments on publicly available Antarctic ice core data as well as two EEG data sets. We provide a scikit-learn compatible pip-installable Python package coroICA as well as R and Matlab implementations, accompanied by documentation at https://sweichwald.de/coroICA/ (Comment: equal contribution between Pfister and Weichwald).
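    The key property a group-wise stationary noise model affords can be checked numerically: within a group, the difference of observed covariances taken over two time windows cancels the constant confounding covariance, leaving a matrix that the unmixing matrix diagonalizes exactly. A minimal sketch of this identity with made-up dimensions (this illustrates the population-level identity only, not coroICA's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(d, d))        # made-up mixing matrix

def observed_cov(source_var, noise_cov):
    """Covariance of x = A s + h, with diagonal Cov(s) and confounding h."""
    return A @ np.diag(source_var) @ A.T + noise_cov

# Group-wise stationary confounding: the same noise covariance in both windows.
N = rng.normal(size=(d, d))
noise_cov = N @ N.T
cov1 = observed_cov(rng.uniform(1, 2, d), noise_cov)
cov2 = observed_cov(rng.uniform(1, 2, d), noise_cov)

diff = cov1 - cov2                 # the confounding term cancels here ...
W = np.linalg.inv(A)               # ... so the unmixing matrix
D = W @ diff @ W.T                 # diagonalizes the difference exactly
off = D - np.diag(np.diag(D))
```

    With sample covariances the cancellation is only approximate, which is where the paper's estimation procedure and identifiability analysis come in.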

    Blind identification of mixtures of quasi-stationary sources

    Blind identification of linear instantaneous mixtures of quasi-stationary sources (BI-QSS) has received great research interest over the past few decades, motivated by its application in blind speech separation. In this problem, we identify the unknown mixing system coefficients by exploiting the time-varying characteristics of quasi-stationary sources. Traditional BI-QSS methods fall into two main categories: i) Parallel Factor Analysis (PARAFAC), which is based on tensor decomposition; ii) Joint Diagonalization (JD), which is based on approximate joint diagonalization of multiple matrices. In both PARAFAC and JD, the joint-source formulation is generally used; i.e., the algorithms are designed to identify the whole mixing system simultaneously. In this thesis, I devise a novel blind identification framework using a Khatri-Rao (KR) subspace formulation. The proposed formulation differs from the traditional ones in that it decomposes the blind identification problem into a number of per-source, structurally less complex subproblems. For overdetermined mixing models, a specialized alternating projections algorithm is proposed for the KR subspace formulation. The resulting algorithm is not only empirically found to be very competitive, but also has a theoretically neat convergence guarantee. Moreover, the proposed algorithm can be applied to underdetermined mixing models in a straightforward manner. Rank minimization heuristics are proposed to speed up the algorithm for the underdetermined mixing model.
    The advantages of employing the rank minimization heuristics are demonstrated by simulations.
    Lee, Ka Kit. Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 72-76). Abstracts also in Chinese.
    Contents: 1 Introduction; 2 Settings of Quasi-Stationary Signals based Blind Identification; 3 Review of Some Existing BI-QSS Algorithms; 4 Proposed Algorithms; 5 Simulation Results; 6 Conclusion and Future Works; Appendices: A Convolutive Mixing Model; B Proofs; C Singular Value Thresholding; D Categories of Speech Sounds and Their Impact on SOSs-based BI-QSS Algorithms.
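    The KR subspace formulation rests on the identity that each vectorized local covariance R_m = A D_m A^H satisfies vec(R_m) = (conj(A) ⊙ A) d_m, where ⊙ denotes the column-wise Khatri-Rao product, so all the vec(R_m) lie in the column space of conj(A) ⊙ A. A quick numerical check of this identity with made-up dimensions:

```python
import numpy as np
from scipy.linalg import khatri_rao

rng = np.random.default_rng(0)
N, K = 4, 3                                  # sensors, sources (made up)
A = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
d = rng.uniform(0.5, 2, K)                   # local source powers in one frame

R = A @ np.diag(d) @ A.conj().T              # local covariance model
# Column-major vec of A diag(d) A^H equals khatri_rao(conj(A), A) @ d,
# since vec(a a^H) = kron(conj(a), a) for each column a of A.
lhs = R.flatten(order="F")
rhs = khatri_rao(A.conj(), A) @ d
err = np.linalg.norm(lhs - rhs)
```

    Because each column of conj(A) ⊙ A involves only one source's steering vector, this is what lets the thesis split identification into per-source subproblems.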