Symplectic analogs of polar decomposition and their applications to bosonic Gaussian channels
We obtain several analogs of the real polar decomposition for even-dimensional
matrices. In particular, we decompose a non-degenerate matrix as a product of a
Hamiltonian and an anti-symplectic matrix, and under additional requirements we
decompose a matrix as a product of a skew-Hamiltonian and a symplectic matrix.
We apply our results to study bosonic Gaussian channels up to inhomogeneous
symplectic transforms.
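The symplectic factorizations above generalize the classical real polar decomposition A = QP, with Q orthogonal and P symmetric positive semidefinite. As background, a minimal sketch of that classical decomposition via the SVD (the symplectic analogs in the paper are not reproduced here):

```python
import numpy as np

def polar(A):
    # Classical real polar decomposition A = Q P via the SVD:
    # Q orthogonal, P symmetric positive semidefinite.
    U, s, Vt = np.linalg.svd(A)
    Q = U @ Vt                   # orthogonal factor
    P = Vt.T @ np.diag(s) @ Vt   # symmetric PSD factor
    return Q, P

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q, P = polar(A)
print(np.allclose(Q @ P, A))            # A = QP
print(np.allclose(Q.T @ Q, np.eye(4)))  # Q is orthogonal
print(np.allclose(P, P.T))              # P is symmetric
```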
Symmetric spaces and Lie triple systems in numerical analysis of differential equations
A remarkable number of different numerical algorithms can be understood and
analyzed using the concepts of symmetric spaces and Lie triple systems, which
are well known in differential geometry from the study of spaces of constant
curvature and their tangents. This theory can be used to unify a range of
different topics, such as polar-type matrix decompositions, splitting methods
for computation of the matrix exponential, composition of selfadjoint numerical
integrators and dynamical systems with symmetries and reversing symmetries. The
thread of this paper is the following: involutive automorphisms on groups
induce a factorization at a group level, and a splitting at the algebra level.
In this paper we will give an introduction to the mathematical theory behind
these constructions, and review recent results. Furthermore, we present a new
Yoshida-like technique for self-adjoint numerical schemes that increases the
order of preservation of symmetries by two units. Since all the time-steps are
positive, the technique is particularly well suited to stiff problems, where a
negative time-step can cause instabilities.
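For context, a minimal sketch of the classical Yoshida "triple jump": composing a self-adjoint second-order scheme with sub-steps (w1 h, w0 h, w1 h) raises the order to four. Note that w0 is negative here; this is exactly the kind of negative time-step the positive-step technique in the paper is designed to avoid. The harmonic oscillator and the leapfrog scheme below are illustrative choices, not taken from the paper.

```python
import numpy as np

def leapfrog(q, p, h):
    # Self-adjoint second-order step for the harmonic oscillator q'' = -q.
    p -= 0.5 * h * q
    q += h * p
    p -= 0.5 * h * q
    return q, p

# Classical Yoshida coefficients: 2*w1 + w0 = 1, and the composition
# leapfrog(w1*h) o leapfrog(w0*h) o leapfrog(w1*h) is fourth order.
w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
w0 = 1.0 - 2.0 * w1  # negative sub-step

def yoshida4(q, p, h):
    for w in (w1, w0, w1):
        q, p = leapfrog(q, p, w * h)
    return q, p

def integrate(stepper, h, t_end=1.0):
    q, p = 1.0, 0.0
    for _ in range(round(t_end / h)):
        q, p = stepper(q, p, h)
    return q  # exact value is cos(t_end)

err2 = abs(integrate(leapfrog, 0.01) - np.cos(1.0))
err4 = abs(integrate(yoshida4, 0.01) - np.cos(1.0))
print(err4 < err2)  # the composed scheme is markedly more accurate
```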
A Structure-Preserving Divide-and-Conquer Method for Pseudosymmetric Matrices
We devise a spectral divide-and-conquer scheme for matrices that are
self-adjoint with respect to a given indefinite scalar product (i.e.,
pseudosymmetric matrices). The pseudosymmetric structure of the matrix is
preserved in the spectral division, such that the method can be applied
recursively to achieve full diagonalization. The method is well-suited for
structured matrices that come up in computational quantum physics and
chemistry. In this application context, additional definiteness properties
guarantee convergence of the matrix sign function iteration within two steps
when Zolotarev functions are used. The steps are easily parallelizable.
Furthermore, it is shown that the matrix decouples into symmetric definite
eigenvalue problems after just one step of spectral division.
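To illustrate spectral division via the matrix sign function, here is a hedged sketch using the plain Newton iteration X ← (X + X⁻¹)/2 on a symmetric test matrix, rather than the Zolotarev-accelerated iterations or the pseudosymmetric structure treated in the paper:

```python
import numpy as np

def matrix_sign(A, tol=1e-12, maxit=100):
    # Newton iteration for the matrix sign function; converges for
    # matrices with no purely imaginary eigenvalues.
    X = A.copy()
    for _ in range(maxit):
        Xnew = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xnew - X, 1) < tol * np.linalg.norm(Xnew, 1):
            return Xnew
        X = Xnew
    return X

# Symmetric test matrix with known eigenvalues on both sides of zero.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([-3.0, -1.5, -0.2, 0.4, 1.0, 2.5]) @ Q.T

S = matrix_sign(A)
P_plus = 0.5 * (np.eye(6) + S)  # spectral projector used for the division
print(np.allclose(S @ S, np.eye(6)))  # the sign matrix is involutory
print(round(np.trace(P_plus)))        # 3: dimension of the positive subspace
```

The projector P_plus splits the problem into the two invariant subspaces, which is the basic spectral-division step the divide-and-conquer scheme applies recursively.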
Efficient Algorithms for Solving Structured Eigenvalue Problems Arising in the Description of Electronic Excitations
Matrices arising in linear-response time-dependent density functional theory and many-body perturbation theory, in particular in the Bethe-Salpeter approach, show a 2 × 2 block structure. The motivation to devise new algorithms, instead of using general-purpose eigenvalue solvers, comes from the need to solve large problems on high-performance computers. This requires parallelizable and communication-avoiding algorithms and implementations. We point out various novel directions for diagonalizing structured matrices. These include the solution of skew-symmetric eigenvalue problems in ELPA, as well as structure-preserving spectral divide-and-conquer schemes employing generalized polar decompositions.
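A small illustration of the kind of 2 × 2 block structure involved (a real Casida/Bethe-Salpeter-like form, chosen here as an assumption for the sketch): with A and B symmetric, H = [[A, B], [-B, -A]] satisfies J H J = -H for the swap matrix J, so its spectrum is symmetric about zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# Symmetric blocks; A is shifted to be positive definite and B kept small
# so that the spectrum is real (an assumption of this toy example).
A = rng.standard_normal((n, n)); A = (A + A.T) / 2 + 4 * np.eye(n)
B = rng.standard_normal((n, n)); B = 0.3 * (B + B.T) / 2

H = np.block([[A, B], [-B, -A]])
ev = np.sort(np.linalg.eigvals(H).real)
print(np.allclose(ev, -ev[::-1]))  # eigenvalues come in +/- pairs
```

Exploiting this pairing is what lets structure-preserving solvers halve the work compared to a general-purpose eigensolver applied to the full 2n × 2n matrix.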