
    Nonorthogonal approximate joint diagonalization with well-conditioned diagonalizers

    To make the results reasonable, existing joint diagonalization algorithms impose a variety of constraints on the diagonalizers. In fact, those constraints can be imposed uniformly by minimizing the condition number of the diagonalizers. Motivated by this, the approximate joint diagonalization problem is recast, for the first time, as a multiobjective optimization problem, and on this basis a new algorithm for nonorthogonal joint diagonalization is developed. The new algorithm yields diagonalizers that not only minimize the diagonalization error but also have condition numbers as small as possible, while degenerate solutions are strictly avoided. Moreover, the new algorithm imposes few restrictions on the target set of matrices to be diagonalized, which makes it widely applicable. Preliminary convergence results are presented, and we also show that, for exactly jointly diagonalizable sets, no local minima exist and the solutions are unique under mild conditions. Extensive numerical simulations illustrate the performance of the algorithm and provide comparisons with other leading diagonalization methods. The practical use of our algorithm is demonstrated on blind source separation (BSS) problems, especially those involving ill-conditioned mixing matrices.
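    The two competing objectives the abstract describes can be sketched as plain gradient descent on their weighted sum. Everything below is an illustrative assumption, not the paper's actual algorithm: the function name, the step sizes, and in particular the scale-invariant surrogate ||W||_F^2 * ||W^-1||_F^2 used here in place of the condition number.

```python
import numpy as np

def ajd_well_conditioned(matrices, lam=0.1, lr=5e-4, n_iter=300):
    """Illustrative sketch (not the paper's algorithm): gradient descent on
    sum_k ||off(W C_k W^T)||_F^2 + lam * ||W||_F^2 * ||W^{-1}||_F^2,
    where off(M) zeroes the diagonal.  The second term is a smooth,
    scale-invariant surrogate that is small only for well-conditioned W,
    so the diagonalization error and the conditioning are traded off."""
    n = matrices[0].shape[0]
    W = np.eye(n)
    for _ in range(n_iter):
        grad = np.zeros((n, n))
        for C in matrices:                 # each C assumed symmetric
            M = W @ C @ W.T
            off = M - np.diag(np.diag(M))
            grad += 4.0 * off @ W @ C      # gradient of ||off(W C W^T)||^2
        # gradient of lam * ||W||_F^2 * ||W^{-1}||_F^2
        Wi = np.linalg.inv(W)
        grad += lam * 2.0 * (np.sum(Wi * Wi) * W
                             - np.sum(W * W) * Wi.T @ Wi @ Wi.T)
        W -= lr * grad
    return W
```

    For an exactly diagonalizable set C_k = A D_k A^T, the off-diagonal error can be driven toward zero while the penalty keeps the returned diagonalizer away from ill-conditioned (and degenerate) matrices.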

    Penalty function-based joint diagonalization approach for convolutive blind separation of nonstationary sources

    A new approach for convolutive blind source separation (BSS) is proposed that explicitly exploits the second-order nonstationarity of the signals and operates in the frequency domain. The algorithm incorporates a penalty function into the cross-power-spectrum-based cost function and thereby converts the separation problem into a joint diagonalization problem with unconstrained optimization. This yields a new member of the family of joint diagonalization criteria and a modification of the search direction of the gradient-based descent algorithm. With this approach, not only are the degenerate solution induced by a null unmixing matrix and the effect of large errors in the elements of the covariance matrices at low-frequency bins removed automatically, but a unifying view of joint diagonalization under unitary or nonunitary constraints is also provided. Numerical experiments verify the performance of the new method and show that a suitable penalty function can lead to faster convergence and better separation of convolved speech signals, particularly in terms of shape preservation and amplitude-ambiguity reduction, compared with conventional second-order algorithms for convolutive mixtures that exploit signal nonstationarity.
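    The role of the penalty term can be illustrated with a real-valued toy version of the criterion (the paper itself works per frequency bin with complex cross-power spectral matrices, and its penalty function is more general). Here a log-determinant penalty is assumed purely for illustration: it diverges as the unmixing matrix approaches the null matrix, so the degenerate zero solution is excluded automatically while the optimization stays unconstrained.

```python
import numpy as np

def penalized_jd(cov_mats, lam=0.5, lr=1e-3, n_iter=300):
    """Toy sketch of a penalty-function JD criterion (assumed form):
    J(W) = sum_k ||off(W R_k W^T)||_F^2 - lam * log det(W W^T).
    Minimizing the first term alone is solved trivially by W = 0; the
    log-det penalty makes that null solution infinitely costly, so
    plain unconstrained gradient descent suffices."""
    n = cov_mats[0].shape[0]
    W = np.eye(n)
    for _ in range(n_iter):
        grad = np.zeros((n, n))
        for R in cov_mats:                 # covariance matrices, symmetric
            M = W @ R @ W.T
            off = M - np.diag(np.diag(M))
            grad += 4.0 * off @ W @ R
        # gradient of -lam * log det(W W^T) is -2 lam W^{-T}
        grad -= lam * 2.0 * np.linalg.inv(W).T
        W -= lr * grad
    return W
```

    Feeding in covariance matrices taken over different nonstationarity epochs, the relative off-diagonal error shrinks while det(W) stays bounded away from zero.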

    Gradient Flow Based Matrix Joint Diagonalization for Independent Component Analysis

    In this thesis, employing the theory of matrix Lie groups, we develop gradient-based flows for the problem of simultaneous or joint diagonalization (JD) of a set of symmetric matrices. This problem has applications in many fields, especially in independent component analysis (ICA). We consider both orthogonal and nonorthogonal JD, viewing the JD problem as the minimization of a common quadratic cost function on a matrix group, and derive gradient-based flows, together with suitable discretizations, for minimizing this cost function on the Riemannian manifolds of O(n) and GL(n). We then use the developed JD methods to introduce a new class of ICA algorithms that sphere the data but do not restrict the subsequent search for the unmixing matrix to orthogonal matrices. These methods provide ICA algorithms that are robust to Gaussian noise by making effective use of both second- and higher-order statistics.
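    A minimal sketch of a discretized flow of this kind for the O(n) case (the GL(n) flow in the thesis is analogous; names and step sizes here are illustrative): the Euclidean gradient of the off-diagonal cost is projected to a skew-symmetric direction, and each step uses a Cayley transform, which keeps the iterate exactly orthogonal.

```python
import numpy as np

def orthogonal_jd_flow(matrices, lr=1e-3, n_iter=300):
    """Illustrative discretized gradient flow for orthogonal JD:
    minimize f(Q) = sum_k ||off(Q C_k Q^T)||_F^2 over Q in O(n).
    The descent direction is the skew-symmetric part of G Q^T, and the
    Cayley transform (I + S)^{-1}(I - S) of a skew matrix S is orthogonal,
    so every iterate remains on the manifold O(n)."""
    n = matrices[0].shape[0]
    Q = np.eye(n)
    I = np.eye(n)
    for _ in range(n_iter):
        G = np.zeros((n, n))
        for C in matrices:                 # each C assumed symmetric
            M = Q @ C @ Q.T
            off = M - np.diag(np.diag(M))
            G += 4.0 * off @ Q @ C         # Euclidean gradient of f
        Om = G @ Q.T - Q @ G.T             # skew-symmetric ascent direction
        # Cayley step approximating expm(-lr * Om); exact on O(n)
        S = 0.5 * lr * Om
        Q = np.linalg.solve(I + S, I - S) @ Q
    return Q
```

    Compared with an unconstrained update followed by re-orthogonalization, the Cayley step is a retraction that respects the group structure by construction, which is the appeal of the Lie-group viewpoint.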