
    Uniqueness Analysis of Non-Unitary Matrix Joint Diagonalization

    Matrix Joint Diagonalization (MJD) is a powerful approach for solving the Blind Source Separation (BSS) problem. It relies on the construction of matrices which are diagonalized by the unknown demixing matrix. Their joint diagonalizer serves as a correct estimate of this demixing matrix only if it is uniquely determined. Thus, a critical question is under what conditions a joint diagonalizer is unique. In the present work we fully answer this question about the identifiability of MJD-based BSS approaches and provide a general result on the uniqueness conditions of matrix joint diagonalization. It unifies all existing results which exploit the concepts of non-circularity, non-stationarity, non-whiteness, and non-Gaussianity. As a corollary, we propose a solution for complex BSS, which can be formulated in closed form in terms of an eigenvalue decomposition and a singular value decomposition of two matrices. Comment: 23 pages.
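    The two-matrix special case behind such closed-form constructions is easy to demonstrate: a pair of symmetric matrices that share an exact joint diagonalizer can be diagonalized simultaneously with a single generalized eigendecomposition. Below is a minimal numpy sketch of that building block only; it is not the paper's EVD/SVD-based complex BSS solution, and the test setup and names are invented for illustration.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 4

# Ground truth: an unknown mixing matrix A and two diagonal "source" matrices.
A = rng.standard_normal((n, n))
D1 = np.diag(rng.uniform(1.0, 2.0, n))
D2 = np.diag(rng.uniform(1.0, 2.0, n))
C1, C2 = A @ D1 @ A.T, A @ D2 @ A.T          # both are diagonalized by W = inv(A).T

# Generalized eigenvectors of the pencil (C1, C2) recover the joint diagonalizer
# up to column scaling and permutation, provided the generalized eigenvalues
# D1[i,i]/D2[i,i] are pairwise distinct (true here with probability one).
_, V = eig(C1, C2)
W = np.real(V)

for C in (C1, C2):
    T = W.T @ C @ W
    print(np.linalg.norm(T - np.diag(np.diag(T))) / np.linalg.norm(T))  # ~1e-15
```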

    An Algebraic Approach to Non-Orthogonal General Joint Block Diagonalization

    The exact/approximate non-orthogonal general joint block diagonalization (NOGJBD) problem of a given real matrix set $\mathcal{A}=\{A_i\}_{i=1}^m$ is to find a nonsingular matrix $W\in\mathbb{R}^{n\times n}$ (diagonalizer) such that $W^T A_i W$ for $i=1,2,\dots,m$ are all exactly/approximately block diagonal matrices with the same diagonal block structure and with as many diagonal blocks as possible. In this paper, we show that a solution to the exact/approximate NOGJBD problem can be obtained by finding the exact/approximate solutions to the system of linear equations $A_i Z = Z^T A_i$ for $i=1,\dots,m$, followed by a block diagonalization of $Z$ via a similarity transformation. A necessary and sufficient condition for the equivalence of the solutions to the exact NOGJBD problem is established. Two numerical methods are proposed to solve the NOGJBD problem, and numerical examples are presented to show the merits of the proposed methods.
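    The linear system $A_i Z = Z^T A_i$ at the heart of this approach can be set up explicitly by vectorization: with a commutation matrix $K$ satisfying $K\,\mathrm{vec}(Z)=\mathrm{vec}(Z^T)$, each equation becomes $[(I\otimes A_i) - (A_i^T\otimes I)K]\,\mathrm{vec}(Z)=0$. The numpy sketch below covers only this step under those standard identities; the subsequent block diagonalization of $Z$ by a similarity transformation and the paper's two numerical methods are not reproduced, and the demo data are invented.

```python
import numpy as np
from scipy.linalg import block_diag

def commutation_matrix(n):
    # K @ vec(Z) == vec(Z.T) for column-major (Fortran-order) vectorization.
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j * n + i, i * n + j] = 1.0
    return K

def common_Z_basis(mats, rtol=1e-10):
    """Basis of the solution space {Z : A_i Z = Z^T A_i for all i}."""
    n = mats[0].shape[0]
    I, K = np.eye(n), commutation_matrix(n)
    M = np.vstack([np.kron(I, A) - np.kron(A.T, I) @ K for A in mats])
    _, s, Vt = np.linalg.svd(M)
    null_rows = Vt[s < rtol * s[0]]               # right singular vectors with ~zero sigma
    return [v.reshape(n, n, order="F") for v in null_rows]

# Demo: a set that is exactly block diagonalizable with two 2x2 blocks.
rng = np.random.default_rng(1)
blocks = [2, 2]
W = rng.standard_normal((4, 4))
Winv = np.linalg.inv(W)
mats = [Winv.T @ block_diag(*[rng.standard_normal((b, b)) for b in blocks]) @ Winv
        for _ in range(3)]

basis = common_Z_basis(mats)                      # always contains Z = I; richer structure reveals blocks
Z = sum(rng.standard_normal() * B for B in basis) # a generic solution Z
print(len(basis), max(np.linalg.norm(A @ Z - Z.T @ A) for A in mats))
```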

    Solving General Joint Block Diagonalization Problem via Linearly Independent Eigenvectors of a Matrix Polynomial

    In this paper, we consider the exact/approximate general joint block diagonalization (GJBD) problem of a matrix set $\{A_i\}_{i=0}^p$ ($p\ge 1$), where a nonsingular matrix $W$ (often referred to as the diagonalizer) needs to be found such that the matrices $W^H A_i W$ are all exactly/approximately block diagonal matrices with as many diagonal blocks as possible. We show that the diagonalizer of the exact GJBD problem can be given by $W=[x_1, x_2, \dots, x_n]\Pi$, where $\Pi$ is a permutation matrix and the $x_i$'s are eigenvectors of the matrix polynomial $P(\lambda)=\sum_{i=0}^p\lambda^i A_i$ such that $[x_1, x_2, \dots, x_n]$ is nonsingular and the geometric multiplicity of each eigenvalue $\lambda_i$ associated with $x_i$ equals one. The equivalence of all solutions to the exact GJBD problem is also established. Moreover, a theoretical proof is given to show why the approximate GJBD problem can be solved similarly to the exact GJBD problem. Based on these theoretical results, a three-stage method is proposed, and numerical results show the merits of the method.
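    For $p=1$ the matrix polynomial reduces to a linear pencil, so the eigenvector construction of $W$ can be mimicked with a standard generalized eigensolver; one way to recover the permutation $\Pi$ is to group eigenvectors that interact in $W^H A_i W$. The sketch below works only under these simplifications and does not reproduce the paper's three-stage method or its treatment of multiplicities; the data and grouping heuristic are illustrative.

```python
import numpy as np
from scipy.linalg import eig, block_diag
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(2)
blocks = [2, 3]
n = sum(blocks)
V = rng.standard_normal((n, n))
Vinv = np.linalg.inv(V)
rand_bd = lambda: block_diag(*[rng.standard_normal((b, b)) for b in blocks])
A0, A1 = Vinv.T @ rand_bd() @ Vinv, Vinv.T @ rand_bd() @ Vinv   # exactly block diagonalizable pair

# Eigenvectors of P(lambda) = A0 + lambda*A1, i.e. of the pencil A0 x = lambda (-A1) x.
_, X = eig(A0, -A1)

# Recover the permutation Pi: eigenvectors belonging to the same diagonal block are
# exactly those that "interact" in X^H A_i X, so group them via connected components.
M = abs(X.conj().T @ A0 @ X) + abs(X.conj().T @ A1 @ X)
adj = csr_matrix((M > 1e-8 * M.max()).astype(float))
_, labels = connected_components(adj, directed=False)
W = X[:, np.argsort(labels)]                        # W = [x_1, ..., x_n] @ Pi

for A in (A0, A1):
    print(np.round(abs(W.conj().T @ A @ W), 3))     # exhibits a common 2x2 / 3x3 block structure
```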

    Independent component analysis for multivariate functional data

    We extend two methods of independent component analysis, fourth order blind identification and joint approximate diagonalization of eigen-matrices, to vector-valued functional data. Multivariate functional data occur naturally and frequently in modern applications, and extending independent component analysis to this setting allows us to distill important information from this type of data, going a step further than functional principal component analysis. To allow the inversion of the covariance operator, we make the assumption that the dependency between the component functions lies in a finite-dimensional subspace. In this subspace we define fourth cross-cumulant operators and use them to construct two novel, Fisher-consistent methods for solving the independent component problem for vector-valued functions. Both simulations and an application to a hand gesture data set show the usefulness and advantages of the proposed methods over functional principal component analysis. Comment: 39 pages, 3 figures.
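    The classical, finite-dimensional version of fourth order blind identification (FOBI) that the paper lifts to the functional setting is compact: whiten the data, then eigendecompose a matrix of fourth moments. The sketch below is for ordinary multivariate data under the usual FOBI assumption of distinct source kurtoses; the operator-level functional construction of the paper is not reproduced, and the toy sources are invented.

```python
import numpy as np

def fobi(X):
    """Classical FOBI for an (n_samples, p) data matrix X; returns estimated sources."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    white = E @ np.diag(d ** -0.5) @ E.T          # whitening matrix cov^{-1/2}
    Z = Xc @ white
    # Matrix of fourth moments E[||z||^2 z z^T]; its eigenvectors give the final rotation.
    r2 = np.sum(Z ** 2, axis=1)
    M = (Z * r2[:, None]).T @ Z / Z.shape[0]
    _, U = np.linalg.eigh(M)
    return Z @ U                                  # independent components, up to sign/order

# Toy mixture: three sources with distinct kurtoses, mixed linearly.
rng = np.random.default_rng(3)
S = np.column_stack([rng.uniform(-1, 1, 2000),
                     rng.laplace(size=2000),
                     rng.standard_normal(2000) ** 3])
X = S @ rng.standard_normal((3, 3)).T
S_hat = fobi(X)
```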

    Interference Mitigation via Relaying

    This paper studies the effectiveness of relaying for interference mitigation in an interference-limited communication scenario. We are motivated by the observation that in a cellular network, a relay node placed at the cell edge observes a combination of intended signal and inter-cell interference that is correlated with the received signal at a nearby destination, so a relaying link can effectively allow the antennas at the relay and at the destination to be pooled together for both signal enhancement and interference mitigation. We model this scenario by a MIMO Gaussian relay channel with a digital relay-to-destination link of finite capacity, and with correlated noise across the relay and destination antennas. Assuming a compress-and-forward strategy with Gaussian input distribution and quantization noise, we propose a coordinate ascent algorithm for obtaining a stationary point of the non-convex joint optimization of the transmit and quantization covariance matrices. For fixed input distribution, the globally optimum quantization noise covariance matrix can be found in closed form using a transformation of the relay's observation that simultaneously diagonalizes two conditional covariance matrices by congruence. For fixed quantization, the globally optimum transmit covariance matrix can be found via convex optimization. This paper further shows that such an optimized achievable rate is within a constant additive gap of the MIMO relay channel capacity. The optimal structure of the quantization noise covariance enables a characterization of the slope of the achievable rate as a function of the relay-to-destination link capacity. Moreover, this paper shows that the improvement in spatial degrees of freedom by MIMO relaying in the presence of noise correlation is related to the aforementioned slope via a connection to the deterministic relay channel.
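    The closed-form step for the quantization noise covariance relies on a classical linear-algebra fact: two symmetric matrices, one of them positive definite, can be simultaneously diagonalized by a congruence transformation. The snippet below illustrates only that transformation; the rate expressions and the coordinate ascent are not reproduced, and the two matrices are invented stand-ins for the conditional covariances in the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n = 3
B = rng.standard_normal((n, 2 * n))
S1 = B @ B.T / (2 * n)                  # positive definite covariance (stand-in)
C = rng.standard_normal((n, n))
S2 = C @ C.T                            # second conditional covariance (stand-in)

# scipy's generalized symmetric eigensolver returns V normalized so that
# V.T @ S1 @ V = I and V.T @ S2 @ V = diag(w): a congruence diagonalizing both.
w, V = eigh(S2, S1)
print(np.allclose(V.T @ S1 @ V, np.eye(n)), np.allclose(V.T @ S2 @ V, np.diag(w)))
```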

    Application of Independent Component Analysis Techniques in Speckle Noise Reduction of Retinal OCT Images

    Optical Coherence Tomography (OCT) is an emerging technique in the field of biomedical imaging, with applications in ophthalmology, dermatology, coronary imaging, and more. OCT images usually suffer from a granular pattern, called speckle noise, which restricts the process of interpretation. Speckle noise reduction techniques are therefore of high importance. To the best of our knowledge, the use of Independent Component Analysis (ICA) techniques has never been explored for speckle reduction of OCT images. Here, a comparative study of several ICA techniques (InfoMax, JADE, FastICA and SOBI) is provided for noise reduction of retinal OCT images. Given multiple B-scans of the same location, eye movements are compensated using a rigid registration technique. Then, different ICA techniques are applied to the aggregated set of B-scans to extract the noise-free image. Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and Equivalent Number of Looks (ENL), as well as an analysis of the computational complexity of the methods, are considered as metrics for comparison. The results show that the use of ICA can be beneficial, especially when only a small number of B-scans is available.
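    The central computational step, feeding a stack of co-registered B-scans of the same location to an ICA routine and keeping the component that carries the common anatomy, can be sketched with an off-the-shelf implementation. The sketch below uses synthetic data with an invented speckle model; registration, the InfoMax/JADE/SOBI variants, and the SNR/CNR/ENL evaluation are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
h, w, n_scans = 64, 64, 8

# Synthetic stand-in for co-registered B-scans: one underlying structure
# corrupted by independent multiplicative speckle in each acquisition.
yy, xx = np.mgrid[0:h, 0:w]
clean = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
scans = np.stack([clean * rng.gamma(shape=4.0, scale=0.25, size=(h, w))
                  for _ in range(n_scans)])

# Each pixel is a "sample", each registered B-scan one "mixture" of that sample.
X = scans.reshape(n_scans, -1).T                     # (n_pixels, n_scans)
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X).T.reshape(3, h, w)

# Take the component most correlated with the scan average as the despeckled estimate.
avg = scans.mean(axis=0).ravel()
best = max(components, key=lambda c: abs(np.corrcoef(c.ravel(), avg)[0, 1]))
```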

    Blind source separation of tensor-valued time series

    The blind source separation model for multivariate time series generally assumes that the observed series is a linear transformation of an unobserved series with temporally uncorrelated or independent components. Given the observations, the objective is to find a linear transformation that recovers the latent series. Several methods for accomplishing this exist; three particular ones are the classic SOBI and the recently proposed generalized FOBI (gFOBI) and generalized JADE (gJADE), each based on the use of joint lagged moments. In this paper we generalize the methodologies behind these algorithms to tensor-valued time series. We assume that our data consist of a tensor observed at each time point and that the observations are linear transformations of latent tensors we wish to estimate. The tensorial generalizations are shown to have particularly elegant forms, and we show that each of them is Fisher consistent and orthogonally equivariant. Comparing the new methods with the original ones in various settings shows that the tensorial extensions are superior both to their vector-valued counterparts and to two existing tensorial dimension reduction methods for i.i.d. data. Finally, applications to fMRI data and video processing show that the methods are capable of extracting relevant information from noisy high-dimensional data. Comment: 26 pages, 6 figures.
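    For ordinary vector-valued series the second-order machinery behind SOBI is short to write down: whiten, build symmetrized lagged autocovariances, and (jointly) diagonalize them. The single-lag special case below (often called AMUSE) sidesteps the joint-diagonalization routine entirely; it is an illustrative stand-in, not the tensorial methods of the paper, which apply such steps mode-wise, and the AR(1) toy data are invented.

```python
import numpy as np

def amuse(X, lag=1):
    """Single-lag second-order BSS (AMUSE) for a (T, p) series X; SOBI would
    jointly diagonalize several such lagged matrices instead of just one."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    white = E @ np.diag(d ** -0.5) @ E.T
    Z = Xc @ white
    # Symmetrized lag-tau autocovariance of the whitened series.
    R = Z[:-lag].T @ Z[lag:] / (len(Z) - lag)
    _, U = np.linalg.eigh((R + R.T) / 2)
    return Z @ U, U.T @ white          # recovered series and unmixing matrix

# Toy example: AR(1) sources with distinct autocorrelations, mixed linearly.
rng = np.random.default_rng(6)
T, phis = 5000, np.array([0.9, 0.5, -0.3])
S = np.zeros((T, 3))
for t in range(1, T):
    S[t] = phis * S[t - 1] + rng.standard_normal(3)
X = S @ rng.standard_normal((3, 3)).T
S_hat, W_hat = amuse(X)
```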

    Linked Component Analysis from Matrices to High Order Tensors: Applications to Biomedical Data

    With the increasing availability of various sensor technologies, we now have access to large amounts of multi-block (also called multi-set, multi-relational, or multi-view) data that need to be jointly analyzed to explore their latent connections. Various component analysis methods have played an increasingly important role in the analysis of such coupled data. In this paper, we first provide a brief review of existing matrix-based (two-way) component analysis methods for the joint analysis of such data, with a focus on biomedical applications. Then, we discuss their important extensions and generalization to multi-block multiway (tensor) data. We show how constrained multi-block tensor decomposition methods are able to extract similar or statistically dependent common features that are shared by all blocks, by incorporating the multiway nature of data. Special emphasis is given to the flexible common and individual feature analysis of multi-block data, with the aim of simultaneously extracting common and individual latent components with desired properties and types of diversity. Illustrative examples are given to demonstrate their effectiveness for biomedical data analysis. Comment: 20 pages, 11 figures, Proceedings of the IEEE, 201

    Perturbation Analysis for Matrix Joint Block Diagonalization

    The matrix joint block diagonalization problem (JBDP) of a given matrix set $\mathcal{A}=\{A_i\}_{i=1}^m$ is about finding a nonsingular matrix $W$ such that all $W^T A_i W$ are block diagonal. It includes the matrix joint diagonalization problem (JDP) as a special case, for which all $W^T A_i W$ are required to be diagonal. Generically, such a matrix $W$ may not exist, but there are practical applications, such as multidimensional independent component analysis (MICA), for which it does exist under the ideal situation, i.e., when no noise is present. In practice, however, noise does enter and, as a consequence, the matrix set is only approximately block diagonalizable, i.e., one can only make all $\widetilde{W}^T A_i \widetilde{W}$ nearly block diagonal at best, where $\widetilde{W}$ is an approximation to $W$, usually obtained by computation. This motivates us to develop a perturbation theory for JBDP to address, among others, the question: how accurate is this $\widetilde{W}$? Previously such a theory has been discussed for JDP, but no effort has yet been attempted for JBDP. In this paper, with the help of a necessary and sufficient condition for solution uniqueness of JBDP recently developed in [Cai and Liu, SIAM J. Matrix Anal. Appl., 38(1):50--71, 2017], we are able to establish an error bound, perform a backward error analysis, and propose a condition number for JBDP. Numerical tests validate the theoretical results. Comment: 34 pages, 4 figures.
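    The quantity that such perturbation bounds control, the departure of $\widetilde{W}^T A_i \widetilde{W}$ from exact block diagonality, is simple to measure once a block partition is fixed. Below is a small helper of that flavour, for illustration only; the error bound, backward error analysis and condition number developed in the paper are not reproduced.

```python
import numpy as np

def off_block_residual(mats, W, block_sizes):
    """Relative Frobenius norm of the off-block-diagonal part of all W.T @ A @ W,
    for a prescribed partition of the coordinates into diagonal blocks."""
    edges = np.cumsum([0] + list(block_sizes))
    mask = np.ones((edges[-1], edges[-1]), dtype=bool)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask[lo:hi, lo:hi] = False                 # True only on off-block entries
    num = den = 0.0
    for A in mats:
        T = W.T @ A @ W
        num += np.sum(T[mask] ** 2)
        den += np.sum(T ** 2)
    return np.sqrt(num / den)                      # 0 means exact joint block diagonality
```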

    Approximate Joint Matrix Triangularization

    We consider the problem of approximate joint triangularization of a set of noisy jointly diagonalizable real matrices. Approximate joint triangularizers are commonly used in the estimation of the joint eigenstructure of a set of matrices, with applications in signal processing, linear algebra, and tensor decomposition. By assuming the input matrices to be perturbations of noise-free, simultaneously diagonalizable ground-truth matrices, the approximate joint triangularizers are expected to be perturbations of the exact joint triangularizers of the ground-truth matrices. We provide a priori and a posteriori perturbation bounds on the "distance" between an approximate joint triangularizer and its exact counterpart. The a priori bounds are theoretical inequalities that involve functions of the ground-truth matrices and noise matrices, whereas the a posteriori bounds are given in terms of observable quantities that can be computed from the input matrices. From a practical perspective, the problem of finding the best approximate joint triangularizer of a set of noisy matrices amounts to solving a nonconvex optimization problem. We show that, under a condition on the noise level of the input matrices, it is possible to find a good initial triangularizer such that the solution obtained by any local descent-type algorithm has certain global guarantees. Finally, we discuss the application of approximate joint matrix triangularization to canonical tensor decomposition and we derive novel estimation error bounds. Comment: 19 pages.
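    One common way to obtain the kind of initial triangularizer such a local search needs (not necessarily the paper's exact construction) is the real Schur decomposition of a generic linear combination of the input matrices: when the inputs share a complete set of eigenvectors and the combination has distinct real eigenvalues, the orthogonal Schur factor already triangularizes every matrix in the set, and with small noise it remains nearly triangularizing. A hedged sketch on invented toy data:

```python
import numpy as np
from scipy.linalg import schur

def initial_triangularizer(mats, rng):
    """Orthogonal U from the real Schur form of a random combination of the inputs."""
    C = sum(rng.standard_normal() * A for A in mats)
    _, U = schur(C, output="real")
    return U

def lower_residual(mats, U):
    """Sum of squared strictly lower-triangular entries of all U.T @ A @ U."""
    return sum(np.sum(np.tril(U.T @ A @ U, k=-1) ** 2) for A in mats)

# Toy set: small perturbations of simultaneously diagonalizable ground-truth matrices.
rng = np.random.default_rng(7)
n = 5
V = rng.standard_normal((n, n))
mats = [V @ np.diag(rng.standard_normal(n)) @ np.linalg.inv(V)
        + 1e-3 * rng.standard_normal((n, n)) for _ in range(4)]

U0 = initial_triangularizer(mats, rng)
print(lower_residual(mats, U0))   # small: U0 already nearly joint-triangularizes the set
```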