
    Non-orthogonal joint block diagonalization based on the LU or QR factorizations for convolutive blind source separation

    This article addresses the problem of blind source separation in which the observed signals are convolutive mixtures of the sources and the sources cannot, in general, be assumed independent and identically distributed. One prevailing and representative approach to these difficulties is the joint block diagonalization (JBD) method. To improve on existing JBD methods, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations. Using Jacobi-type elementary matrices, we replace the high-dimensional minimization problem with a sequence of simple one-dimensional problems. The new methods are more general: the target matrices need not be orthogonal, positive definite, or symmetric, a preliminary whitening stage is no longer compulsory, and convergence is guaranteed. The performance of the proposed algorithms is compared with existing state-of-the-art JBD algorithms in computer simulations and vibration experiments. The numerical results demonstrate that the two new algorithms are robust and effective and yield a significant improvement: shorter convergence time, higher convergence precision, and a better success rate of block diagonalization. The proposed algorithms are also effective in separating vibration signals from convolutive mixtures.
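
    As a concrete illustration of the Jacobi-type idea described above, the following minimal NumPy sketch jointly block-diagonalizes a small set of matrices using elementary LU-style updates. It is an illustration under stated assumptions, not the paper's algorithm: the one-dimensional minimizers are found here by a crude grid search, whereas the paper derives them from the LU/QR parametrization, and all names (`offblock_norm`, `jbd_cost`, `lu_jacobi_sweep`) are hypothetical.

```python
import numpy as np

def offblock_norm(M, blocks):
    """Sum of squared entries of M lying outside the diagonal blocks.

    `blocks` lists index slices of the block partition, e.g.
    [slice(0, 2), slice(2, 4)] for two 2x2 blocks.
    """
    total = np.sum(M ** 2)
    for b in blocks:
        total -= np.sum(M[b, b] ** 2)
    return total

def jbd_cost(mats, B, blocks):
    """JBD cost: off-block-diagonal energy of B @ A @ B.T over all targets."""
    return sum(offblock_norm(B @ A @ B.T, blocks) for A in mats)

def lu_jacobi_sweep(mats, B, blocks, grid=np.linspace(-1.0, 1.0, 201)):
    """One Jacobi-type sweep: for each position (i, j), i != j, apply an
    elementary triangular update L = I + a * e_i e_j^T, choosing the
    scalar `a` by a one-dimensional search (a grid here, for simplicity)."""
    N = B.shape[0]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            best_a, best_c = 0.0, jbd_cost(mats, B, blocks)
            for a in grid:
                L = np.eye(N)
                L[i, j] = a
                c = jbd_cost(mats, L @ B, blocks)
                if c < best_c:
                    best_a, best_c = a, c
            L = np.eye(N)
            L[i, j] = best_a
            B = L @ B
    return B

# Toy problem: two 4x4 targets sharing a hidden block-diagonal structure.
rng = np.random.default_rng(0)
blocks = [slice(0, 2), slice(2, 4)]
A0 = rng.standard_normal((4, 4))          # unknown mixing matrix
mats = []
for _ in range(2):
    D = np.zeros((4, 4))
    for b in blocks:
        D[b, b] = rng.standard_normal((2, 2))
    mats.append(A0 @ D @ A0.T)

B = np.eye(4)
print(jbd_cost(mats, B, blocks))
for _ in range(10):
    B = lu_jacobi_sweep(mats, B, blocks)
print(jbd_cost(mats, B, blocks))  # decreases monotonically by construction
```

    Because each elementary update I + a e_i e_j^T has unit determinant, B remains invertible throughout, which is why this family of non-orthogonal updates can dispense with a preliminary whitening stage.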

    Simultaneous Source Localization and Polarization Estimation via Non-Orthogonal Joint Diagonalization with Vector-Sensors

    Joint estimation of direction-of-arrival (DOA) and polarization with electromagnetic vector-sensors (EMVS) is considered in the framework of complex-valued non-orthogonal joint diagonalization (CNJD). Two new CNJD algorithms are presented, which tackle the high-dimensional optimization problem in CNJD via a sequence of simple sub-optimization problems, using LU or LQ decompositions of the target matrices together with a Jacobi-type scheme. Building on these CNJD algorithms, we then present a novel strategy that exploits the multi-dimensional structure present in the second-order statistics of the EMVS outputs for simultaneous DOA and polarization estimation. Simulations compare the proposed strategy with existing tensor-based and joint-diagonalization-based methods.
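
    The two-matrix case gives the flavor of non-orthogonal joint diagonalization in closed form: if two Hermitian targets share the same (non-unitary) mixing matrix, a generalized eigendecomposition diagonalizes them jointly. The sketch below, with assumed toy data, illustrates exactly this special case; the CNJD algorithms in the paper generalize it to many target matrices via LU or LQ Jacobi-type updates, and none of the code here is theirs.

```python
import numpy as np

# Toy setup (assumed): R1 = A D1 A^H and R2 = A D2 A^H share the same
# non-unitary mixing matrix A, as second-order statistics of EMVS
# outputs would in the noiseless model.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
D1 = np.diag(rng.uniform(0.5, 2.0, 4))
D2 = np.diag(rng.uniform(0.5, 2.0, 4))
R1 = A @ D1 @ A.conj().T
R2 = A @ D2 @ A.conj().T

# Generalized eigenvectors of the pencil (R1, R2): each column of V is
# proportional to a column of A^{-H}, up to permutation and scaling.
_, V = np.linalg.eig(np.linalg.solve(R2, R1))

# V^H R_k V is then (numerically) diagonal for both targets:
print(np.round(np.abs(V.conj().T @ R1 @ V), 6))
print(np.round(np.abs(V.conj().T @ R2 @ V), 6))
```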

    A sparse decomposition of low rank symmetric positive semi-definite matrices

    Suppose that $A \in \mathbb{R}^{N \times N}$ is symmetric positive semidefinite with rank $K \le N$. Our goal is to decompose $A$ into a sum of $K$ rank-one matrices, $A = \sum_{k=1}^{K} g_k g_k^T$, where the modes $\{g_k\}_{k=1}^{K}$ are required to be as sparse as possible. In contrast to the eigendecomposition, these sparse modes are not required to be orthogonal. Such a problem arises in random field parametrization, where $A$ is the covariance function, and is intractable to solve in general. In this paper, we partition the indices from $1$ to $N$ into several patches and quantify the sparseness of a vector by the number of patches on which it is nonzero, which we call patch-wise sparseness. Our aim is to find the decomposition that minimizes the total patch-wise sparseness of the decomposed modes. We propose a domain-decomposition-type method, called intrinsic sparse mode decomposition (ISMD), which follows a "local-modes-construction + patching-up" procedure. The key step of the ISMD is to construct local pieces of the intrinsic sparse modes by solving a joint diagonalization problem; a pivoted Cholesky decomposition is then used to glue these local pieces together. Optimal sparse decomposition, consistency across different domain decompositions, and robustness to small perturbations are proved under the so-called regular-sparse assumption (see Definition 1.2). We provide simulation results showing the efficiency and robustness of the ISMD, and we compare it with other existing methods, e.g., the eigendecomposition, the pivoted Cholesky decomposition, and convex relaxations of sparse principal component analysis [25], [40].
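
    The "patching-up" step above relies on a pivoted Cholesky decomposition, which can be shown in isolation. Below is a minimal NumPy sketch of a rank-revealing pivoted Cholesky applied to a low-rank PSD matrix; the function name and tolerance are ours, and the joint-diagonalization construction of the local modes is not reproduced here.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-12):
    """Rank-revealing pivoted Cholesky of a symmetric PSD matrix.

    Returns G with A ~= G @ G.T, picking the largest remaining diagonal
    entry of the residual as the pivot at each step and stopping once
    that diagonal drops below `tol`.
    """
    A = np.array(A, dtype=float)
    N = A.shape[0]
    G = np.zeros((N, 0))
    d = np.diag(A).copy()           # diagonal of the current residual
    while G.shape[1] < N:
        j = int(np.argmax(d))
        if d[j] <= tol:
            break
        # j-th column of the residual A - G @ G.T, normalized by the pivot
        g = (A[:, j] - G @ G[j, :]) / np.sqrt(d[j])
        G = np.hstack([G, g[:, None]])
        d = d - g ** 2
    return G

# Toy check on a rank-2 PSD matrix: two pivots recover it exactly.
rng = np.random.default_rng(2)
M = rng.standard_normal((6, 2))
A = M @ M.T
G = pivoted_cholesky(A)
print(G.shape, np.allclose(A, G @ G.T))   # -> (6, 2) True
```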

    Spectral methods for multimodal data analysis

    Spectral methods have proven to be an important and versatile tool in a wide range of problems in the fields of computer graphics, machine learning, pattern recognition, and computer vision, where many important problems boil down to constructing a Laplacian operator and finding a few of its eigenvalues and eigenfunctions. Classical examples include the computation of diffusion distances on manifolds in computer graphics, Laplacian eigenmaps, and spectral clustering in machine learning. In many cases, one has to deal with multiple data spaces simultaneously. For example, clustering multimedia data in machine learning applications involves various modalities or "views" (e.g., text and images), and finding correspondence between shapes in computer graphics problems is an operation performed between two or more modalities. In this thesis, we develop a generalization of spectral methods to deal with multiple data spaces and apply it to problems from the domains of computer graphics, machine learning, and image processing. Our main construction is based on simultaneous diagonalization of Laplacian operators. We present an efficient numerical technique for computing joint approximate eigenvectors of two or more Laplacians in challenging noisy scenarios, which also appears to be the first general non-smooth manifold optimization method. Finally, we use the relation between joint approximate diagonalizability and approximate commutativity of operators to define a structural similarity measure for images. We use this measure to perform structure-preserving color manipulations of a given image.
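
    The last point, that joint approximate diagonalizability is tied to approximate commutativity, suggests a compact illustration: the Frobenius norm of the commutator of two Laplacians can serve as a structural dissimilarity. The sketch below uses an assumed toy construction (a 4-neighbour pixel-grid Laplacian with Gaussian intensity weights); the function names and setup are ours, not the thesis code.

```python
import numpy as np

def grid_laplacian(img):
    """Unnormalized graph Laplacian of a 4-neighbour pixel grid with
    Gaussian weights on intensity differences (one common construction;
    the thesis works with Laplacians of general data spaces)."""
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    sigma = img.std() + 1e-12
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours
                if y + dy < h and x + dx < w:
                    j = (y + dy) * w + (x + dx)
                    wt = np.exp(-(img[y, x] - img[y + dy, x + dx]) ** 2
                                / (2 * sigma ** 2))
                    W[i, j] = W[j, i] = wt
    return np.diag(W.sum(axis=1)) - W

def commutator_distance(L1, L2):
    """Normalized ||L1 @ L2 - L2 @ L1||_F: near zero when the operators
    almost commute, i.e. are approximately jointly diagonalizable."""
    return (np.linalg.norm(L1 @ L2 - L2 @ L1)
            / (np.linalg.norm(L1) * np.linalg.norm(L2)))

rng = np.random.default_rng(3)
img = rng.random((8, 8))
L = grid_laplacian(img)
print(commutator_distance(L, L))                  # 0.0: identical structure
print(commutator_distance(L, grid_laplacian(rng.random((8, 8)))))  # > 0
```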