
    The geometric mean of two matrices from a computational viewpoint

    The geometric mean of two matrices is considered and analyzed from a computational viewpoint. Some useful theoretical properties are derived and an analysis of the conditioning is performed. Several numerical algorithms based on different properties and representations of the geometric mean are discussed and analyzed, and it is shown that most of them can be classified in terms of rational approximations of the inverse square root function. A review of the relevant applications is given.
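    The geometric mean discussed here is, for symmetric positive definite A and B, the matrix A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. A minimal sketch of this definition (not any of the paper's specific algorithms) using SciPy's matrix square root, together with the Riccati-equation check G A^{-1} G = B that characterizes the geometric mean:

    ```python
    import numpy as np
    from scipy.linalg import sqrtm, inv

    def geometric_mean(A, B):
        """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} for SPD A, B."""
        Ah = sqrtm(A)            # principal square root of A
        Ahi = inv(Ah)            # A^{-1/2}
        G = Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah
        return (G + G.T) / 2     # symmetrize to suppress rounding noise

    # Two symmetric positive definite matrices
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    B = np.array([[1.0, 0.2], [0.2, 3.0]])
    G = geometric_mean(A, B)

    # The geometric mean is the unique SPD solution of G A^{-1} G = B
    print(np.allclose(G @ inv(A) @ G, B))
    ```

    Production codes avoid the explicit inverse and use better-conditioned formulations (Cholesky factors, the rational approximations mentioned in the abstract), but the direct formula is the natural reference point.
    
    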

    Symmetric spaces and Lie triple systems in numerical analysis of differential equations

    A remarkable number of different numerical algorithms can be understood and analyzed using the concepts of symmetric spaces and Lie triple systems, which are well known in differential geometry from the study of spaces of constant curvature and their tangents. This theory can be used to unify a range of different topics, such as polar-type matrix decompositions, splitting methods for computation of the matrix exponential, composition of self-adjoint numerical integrators, and dynamical systems with symmetries and reversing symmetries. The thread of this paper is the following: involutive automorphisms on groups induce a factorization at the group level and a splitting at the algebra level. In this paper we give an introduction to the mathematical theory behind these constructions and review recent results. Furthermore, we present a new Yoshida-like technique for self-adjoint numerical schemes that allows the order of preservation of symmetries to be increased by two units. Since all the time-steps are positive, the technique is particularly suited to stiff problems, where a negative time-step can cause instabilities.
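    For context on what a "Yoshida-like" composition does, here is the classical Yoshida triple jump (not the paper's new positive-step variant): composing a self-adjoint order-2 scheme with step sizes (g1·h, g0·h, g1·h) raises the order to 4, at the price of a negative middle step g0 < 0, which is exactly what the paper's technique avoids. A sketch on the harmonic oscillator with Strang (leapfrog) splitting as the base scheme:

    ```python
    import numpy as np

    # Strang (leapfrog) step for the harmonic oscillator H = p^2/2 + q^2/2:
    # a self-adjoint (time-symmetric) second-order scheme.
    def strang(q, p, h):
        p -= 0.5 * h * q      # half kick
        q += h * p            # drift
        p -= 0.5 * h * q      # half kick
        return q, p

    # Classical Yoshida coefficients: g0 = 1 - 2*g1 is negative.
    g1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    g0 = 1.0 - 2.0 * g1

    def yoshida4(q, p, h):
        q, p = strang(q, p, g1 * h)
        q, p = strang(q, p, g0 * h)   # the problematic negative substep
        q, p = strang(q, p, g1 * h)
        return q, p

    def integrate(stepper, q, p, h, n):
        for _ in range(n):
            q, p = stepper(q, p, h)
        return q, p

    # Integrate to t = 1 from (q, p) = (1, 0); exact solution q(t) = cos t.
    h, n = 0.01, 100
    q2, _ = integrate(strang, 1.0, 0.0, h, n)
    q4, _ = integrate(yoshida4, 1.0, 0.0, h, n)
    exact = np.cos(1.0)
    print(abs(q4 - exact) < abs(q2 - exact))  # composed scheme is more accurate
    ```

    Both the base scheme and the composition are palindromic, hence self-adjoint; the paper's contribution is achieving a comparable order increase with all substeps positive.
    
    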

    Time Reversal and n-qubit Canonical Decompositions

    For n an even number of qubits and v a unitary evolution, a matrix decomposition v=k1 a k2 of the unitary group is explicitly computable and allows for study of the dynamics of the concurrence entanglement monotone. The side factors k1 and k2 of this Concurrence Canonical Decomposition (CCD) are concurrence symmetries, so the dynamics reduce to consideration of the a factor. In this work, we provide an explicit numerical algorithm computing v=k1 a k2 for n odd. Further, in the odd case we lift the monotone to a two-argument function, allowing for a theory of concurrence dynamics in odd qubits. The generalization may also be studied using the CCD, leading again to maximal concurrence capacity for most unitaries. The key technique is to consider the spin-flip as a time reversal symmetry operator in Wigner's axiomatization; the original CCD derivation may be restated entirely in terms of this time reversal. En route, we observe a Kramers' nondegeneracy: the existence of a nondegenerate eigenstate of any time reversal symmetric n-qubit Hamiltonian demands (i) n even and (ii) maximal concurrence of said eigenstate. We provide examples of how to apply this work to study the kinematics and dynamics of entanglement in spin chain Hamiltonians.
    Comment: 20 pages, 3 figures; v2 (17 pp.): major revision, new abstract, introduction, expanded bibliography.
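    The spin-flip underlying the CCD is most familiar in the two-qubit case, where the concurrence of a pure state is C(ψ) = |⟨ψ*| σ_y ⊗ σ_y |ψ⟩|. A minimal numerical sketch of this standard formula (the n-qubit CCD machinery of the paper is not reproduced here):

    ```python
    import numpy as np

    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])   # Pauli sigma_y

    def concurrence(psi):
        """Two-qubit pure-state concurrence C = |psi^T (sy (x) sy) psi|,
        i.e. overlap of psi with its spin-flipped (time-reversed) partner."""
        flip = np.kron(sy, sy)        # the spin-flip operator (real matrix)
        return abs(psi @ flip @ psi)  # psi^T F psi, since F is real symmetric

    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # maximally entangled
    prod = np.array([1, 0, 0, 0], dtype=complex)               # product state |00>
    print(concurrence(bell), concurrence(prod))                # 1 and 0
    ```

    The antiunitary character of the spin-flip (complex conjugation composed with σ_y ⊗ σ_y) is exactly what lets the paper recast it as a Wigner time reversal.
    
    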

    Generalized power method for sparse principal component analysis

    In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. Our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed.
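    The flavor of a single-unit method can be sketched as a power-type iteration with soft-thresholding: iterate the gradient step of the (convex) objective on the unit sphere, zeroing small loadings. This is a simplified illustration in the spirit of the ℓ1 single-unit formulation, not the paper's exact algorithm; the data, the sparsity parameter `gamma`, and the column-norm initialization are all illustrative choices.

    ```python
    import numpy as np

    def sparse_pc(A, gamma, iters=100):
        """One sparse loading vector via power iteration + soft-thresholding.
        gamma controls sparsity; gamma = 0 recovers the leading singular vector."""
        x = np.linalg.norm(A, axis=0)        # deterministic start: column norms
        x /= np.linalg.norm(x)
        for _ in range(iters):
            g = A.T @ (A @ x)                                    # power step
            g = np.sign(g) * np.maximum(np.abs(g) - gamma, 0.0)  # soft-threshold
            nrm = np.linalg.norm(g)
            if nrm == 0.0:
                break                        # gamma too large: everything zeroed
            x = g / nrm                      # project back onto the unit sphere
        return x

    # Toy data: a strong common factor on the first two of ten variables.
    rng = np.random.default_rng(0)
    A = 0.1 * rng.standard_normal((100, 10))
    A[:, :2] += 2.0 * rng.standard_normal((100, 1))

    z = sparse_pc(A, gamma=50.0)
    print(np.count_nonzero(z))   # only the two signal variables keep nonzero loadings
    ```

    With `gamma = 0` the loop is ordinary power iteration on AᵀA; increasing `gamma` trades explained variance for exact zeros in the loadings.
    
    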

    Optimal map of the modular structure of complex networks

    Modular structure is pervasive in many complex networks of interactions observed in the natural, social and technological sciences. Its study sheds light on the relation between the structure and function of complex systems. Generally speaking, modules are islands of highly connected nodes separated by a relatively small number of links. Every module can have contributions of links from any node in the network. The challenge is to disentangle these contributions to understand how the modular structure is built. The main problem is that the analysis of a certain partition into modules involves, in principle, as many data points as the number of modules times the number of nodes. To confront this challenge, here we first define the contribution matrix, the mathematical object containing all the information about the partition of interest, and then we use a Truncated Singular Value Decomposition to extract the best representation of this matrix in a plane. The analysis of this projection allows us to scrutinize the skeleton of the modular structure, revealing the structure of individual modules and their interrelations.
    Comment: 21 pages, 10 figures.
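    The pipeline described above can be sketched in a few lines: build a node-by-module contribution matrix from an adjacency matrix and a partition, then keep the two leading singular directions as planar coordinates. The tiny two-module graph and the exact form of the contribution matrix (link counts from each node into each module) are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    # Toy undirected network with a known 2-module partition.
    A = np.array([
        [0, 1, 1, 0, 0, 0],
        [1, 0, 1, 0, 0, 0],
        [1, 1, 0, 1, 0, 0],
        [0, 0, 1, 0, 1, 1],
        [0, 0, 0, 1, 0, 1],
        [0, 0, 0, 1, 1, 0],
    ])
    labels = np.array([0, 0, 0, 1, 1, 1])     # module of each node
    n_modules = labels.max() + 1

    # Contribution matrix C[i, a] = number of links node i has into module a.
    C = np.stack([A[:, labels == a].sum(axis=1) for a in range(n_modules)],
                 axis=1)

    # Truncated SVD: the two leading singular directions give the best
    # rank-2 (planar) representation of the contribution matrix.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    proj = U[:, :2] * s[:2]                   # 2-D coordinates of each node
    print(proj.shape)                         # (6, 2)
    ```

    In a real analysis the number of modules is large, so truncating to rank 2 is a genuine compression; here it is exact, which makes the construction easy to check.
    
    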
