
    A sparse decomposition of low rank symmetric positive semi-definite matrices

    Suppose that $A \in \mathbb{R}^{N \times N}$ is symmetric positive semidefinite with rank $K \le N$. Our goal is to decompose $A$ into $K$ rank-one matrices $\sum_{k=1}^K g_k g_k^T$, where the modes $\{g_k\}_{k=1}^K$ are required to be as sparse as possible. In contrast to the eigendecomposition, these sparse modes are not required to be orthogonal. Such a problem arises in random field parametrization, where $A$ is the covariance function, and is intractable to solve in general. In this paper, we partition the indices from 1 to $N$ into several patches and propose to quantify the sparseness of a vector by the number of patches on which it is nonzero, which we call patch-wise sparseness. Our aim is to find the decomposition that minimizes the total patch-wise sparseness of the decomposed modes. We propose a domain-decomposition-type method, called intrinsic sparse mode decomposition (ISMD), which follows a "local-modes-construction + patching-up" procedure. The key step of the ISMD is to construct local pieces of the intrinsic sparse modes by solving a joint diagonalization problem. Thereafter, a pivoted Cholesky decomposition is used to glue these local pieces together. Optimality of the sparse decomposition, consistency across different domain decompositions, and robustness to small perturbations are proved under the so-called regular-sparse assumption (see Definition 1.2). We provide simulation results to show the efficiency and robustness of the ISMD. We also compare the ISMD to other existing methods, e.g., eigendecomposition, pivoted Cholesky decomposition, and convex relaxation of sparse principal component analysis [25] and [40].
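    For a concrete picture of one building block, here is a minimal NumPy sketch (ours; the function name and toy matrix are illustrative, not from the paper's code) of a pivoted Cholesky rank-one decomposition of a low-rank PSD matrix, the step the ISMD uses to glue local pieces together. The full ISMD additionally enforces patch-wise sparsity via local joint diagonalization.

        import numpy as np

        def pivoted_cholesky_modes(A, tol=1e-10):
            """Decompose a PSD matrix A of rank K into K rank-one terms
            A = sum_k g_k g_k^T via pivoted Cholesky (a baseline the
            paper compares against; ISMD itself adds sparsity control)."""
            A = A.astype(float).copy()
            modes = []
            for _ in range(A.shape[0]):
                p = np.argmax(np.diag(A))       # pivot: largest remaining diagonal
                if A[p, p] <= tol:              # rank exhausted
                    break
                g = A[:, p] / np.sqrt(A[p, p])  # rank-one factor from pivot column
                A -= np.outer(g, g)             # deflate
                modes.append(g)
            return modes

        # Example: a rank-2 PSD matrix built from two sparse modes.
        g1 = np.array([1.0, 2.0, 0.0, 0.0])
        g2 = np.array([0.0, 0.0, 3.0, 1.0])
        A = np.outer(g1, g1) + np.outer(g2, g2)
        for g in pivoted_cholesky_modes(A):
            print(np.round(g, 3))   # recovers the two sparse modes here

    In this easy example the pivoting alone recovers the sparse modes; in general it need not, which is the gap the ISMD's patch-wise sparseness criterion is designed to close.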

    Nonorthogonal approximate joint diagonalization with well-conditioned diagonalizers

    To make their results meaningful, existing joint diagonalization algorithms impose a variety of constraints on the diagonalizers. In fact, those constraints can be imposed uniformly by minimizing the condition number of the diagonalizers. Motivated by this, the approximate joint diagonalization problem is recast as a multiobjective optimization problem for the first time. On this basis, a new algorithm for nonorthogonal joint diagonalization is developed. The new algorithm yields diagonalizers that not only minimize the diagonalization error but also have condition numbers as small as possible; meanwhile, degenerate solutions are strictly avoided. Moreover, the new algorithm imposes few restrictions on the target set of matrices to be diagonalized, which makes it widely applicable. Preliminary convergence results are presented, and we also show that, for exactly jointly diagonalizable sets, no local minima exist and the solutions are unique under mild conditions. Extensive numerical simulations illustrate the performance of the algorithm and provide comparisons with other leading diagonalization methods. The practical use of our algorithm is demonstrated on blind source separation (BSS) problems, especially when ill-conditioned mixing matrices are involved.
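    As a rough illustration of the idea, the two objectives can be scalarized into one penalized cost. The sketch below is ours, not the paper's algorithm (which treats the multiobjective problem differently); the weight lam and the use of a generic derivative-free optimizer are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def off_frob2(M):
            # squared Frobenius norm of the off-diagonal part of M
            return np.sum(M**2) - np.sum(np.diag(M)**2)

        def ajd_well_conditioned(matrices, lam=0.1, seed=0):
            # Naive weighted-sum scalarization: joint diagonalization error
            # plus a condition-number penalty on the diagonalizer V.
            n = matrices[0].shape[0]
            rng = np.random.default_rng(seed)

            def cost(x):
                V = x.reshape(n, n)
                err = sum(off_frob2(V @ A @ V.T) for A in matrices)
                # cond(V) blows up for (near-)singular V, so the penalty
                # keeps the search away from degenerate solutions
                return err + lam * np.linalg.cond(V)

            x0 = (np.eye(n) + 0.01 * rng.standard_normal((n, n))).ravel()
            return minimize(cost, x0, method="Nelder-Mead").x.reshape(n, n)

    Even this crude penalty shows why conditioning matters: the trivial minimizer V = 0 of the diagonalization error is excluded automatically.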

    Non-orthogonal joint block diagonalization based on the LU or QR factorizations for convolutive blind source separation

    This article addresses the problem of blind source separation in which the observed signals are most often convolutive mixtures and, moreover, the source signals generally cannot be assumed to be independent and identically distributed. One prevailing and representative approach for overcoming these difficulties is the joint block diagonalization (JBD) method. To improve on existing JBD methods, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations. Using Jacobi-type matrices, we replace a high-dimensional minimization problem with a sequence of simple one-dimensional problems. The novel methods are more general in that orthogonal, positive-definite, or symmetric matrices and a preliminary whitening stage are no longer compulsory, and their convergence is guaranteed. The performance of the proposed algorithms, compared with existing state-of-the-art JBD algorithms, is evaluated in computer simulations and vibration experiments. The numerical results demonstrate the robustness and effectiveness of the two novel algorithms and show a significant improvement: shorter convergence time, higher convergence precision, and a better success rate of block diagonalization. The proposed algorithms are also effective in separating convolutively mixed vibration signals.
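    To illustrate the "sequence of simple one-dimensional problems" idea, here is a hedged Python sketch (ours, not the authors' implementation) that builds the diagonalizer from elementary LU-type matrices, each differing from the identity in a single entry optimized by a bounded 1-D search; the block size, bounds, and sweep count are our illustrative choices.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def offblock_frob2(M, bs):
            # squared Frobenius norm outside the bs-by-bs diagonal blocks
            on = sum(np.sum(M[i:i+bs, i:i+bs]**2)
                     for i in range(0, M.shape[0], bs))
            return np.sum(M**2) - on

        def jbd_lu_sweeps(matrices, bs, sweeps=20):
            """Jacobi-type joint block diagonalization via elementary
            triangular (LU-type) updates, one scalar parameter at a time."""
            n = matrices[0].shape[0]
            mats = [A.astype(float).copy() for A in matrices]
            V = np.eye(n)
            for _ in range(sweeps):
                for i in range(n):
                    for j in range(n):
                        if i // bs == j // bs:  # entry inside a diagonal block
                            continue
                        def cost(t):
                            E = np.eye(n)
                            E[i, j] = t         # elementary LU-type transform
                            return sum(offblock_frob2(E @ A @ E.T, bs)
                                       for A in mats)
                        t = minimize_scalar(cost, bounds=(-1.0, 1.0),
                                            method="bounded").x
                        E = np.eye(n)
                        E[i, j] = t
                        mats = [E @ A @ E.T for A in mats]
                        V = E @ V
            return V, mats

    Since t = 0 leaves the matrices unchanged, each 1-D step can only decrease the off-block norm, which is the monotonicity that Jacobi-type schemes exploit.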

    Spectral methods for multimodal data analysis

    Spectral methods have proven themselves an important and versatile tool in a wide range of problems in the fields of computer graphics, machine learning, pattern recognition, and computer vision, where many important problems boil down to constructing a Laplacian operator and finding a few of its eigenvalues and eigenfunctions. Classical examples include the computation of diffusion distances on manifolds in computer graphics, Laplacian eigenmaps, and spectral clustering in machine learning. In many cases, one has to deal with multiple data spaces simultaneously. For example, clustering multimedia data in machine learning applications involves various modalities or "views" (e.g., text and images), and finding correspondence between shapes in computer graphics is an operation performed between two or more modalities. In this thesis, we develop a generalization of spectral methods to deal with multiple data spaces and apply it to problems from the domains of computer graphics, machine learning, and image processing. Our main construction is based on the simultaneous diagonalization of Laplacian operators. We present an efficient numerical technique for computing joint approximate eigenvectors of two or more Laplacians in challenging noisy scenarios, which also appears to be the first general non-smooth manifold optimization method. Finally, we use the relation between joint approximate diagonalizability and approximate commutativity of operators to define a structural similarity measure for images, and we use this measure to perform structure-preserving color manipulations of a given image.
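    To make the commutativity relation concrete, here is a small NumPy sketch (ours; the function names and the normalization are illustrative, not taken from the thesis) that scores the structural dissimilarity of two operators by the norm of their commutator, which vanishes exactly when they can be diagonalized simultaneously.

        import numpy as np

        def graph_laplacian(W):
            # unnormalized Laplacian L = D - W of a weighted adjacency matrix
            return np.diag(W.sum(axis=1)) - W

        def commutator_score(L1, L2):
            # ||L1 L2 - L2 L1||_F, normalized; approximately commuting
            # operators are approximately jointly diagonalizable
            C = L1 @ L2 - L2 @ L1
            return (np.linalg.norm(C, "fro")
                    / (np.linalg.norm(L1, "fro") * np.linalg.norm(L2, "fro")))

        # A 4-node path graph commutes exactly with itself:
        W = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        L = graph_laplacian(W)
        print(commutator_score(L, L))                        # 0.0
        print(commutator_score(L, np.diag([1., 2., 3., 4.])))  # > 0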

    Diagonalization of replicated transfer matrices for disordered Ising spin systems

    We present an alternative procedure for solving the eigenvalue problem of replicated transfer matrices describing disordered spin systems with (random) 1D nearest-neighbor bonds and/or random fields, possibly in combination with (random) long-range bonds. Our method is based on transforming the original eigenvalue problem for a $2^n \times 2^n$ matrix (where $n \to 0$) into an eigenvalue problem for integral operators. We first develop our formalism for the Ising chain with random bonds and fields, where we recover known results. We then apply our methods to models of spins which interact simultaneously via a one-dimensional ring and via more complex long-range connectivity structures, e.g. $1+\infty$-dimensional neural networks and `small world' magnets. Numerical simulations confirm our predictions satisfactorily.
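    For orientation, the replicated transfer matrix in such problems typically takes the following standard form (sketched here with illustrative notation for the random-bond, random-field Ising ring; the paper's precise definitions may differ):

        % Disorder-averaged replicated partition function of an N-spin ring:
        %   [Z^n]_dis = Tr T^N,  with T the 2^n x 2^n replicated transfer matrix
        T(\boldsymbol{\sigma},\boldsymbol{\sigma}') =
          \Big\langle \exp\Big( \beta J \sum_{\alpha=1}^{n} \sigma_\alpha \sigma'_\alpha
            + \frac{\beta h}{2} \sum_{\alpha=1}^{n} (\sigma_\alpha + \sigma'_\alpha) \Big)
          \Big\rangle_{J,h},
        \qquad \boldsymbol{\sigma},\boldsymbol{\sigma}' \in \{-1,+1\}^n .
        % The quenched free energy per spin then follows from the replica limit
        %   f = -\lim_{n\to 0} \lim_{N\to\infty} ( [Z^n]_dis - 1 ) / (\beta N n),
        % which is where the n -> 0 eigenvalue problem for T enters.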