The seriation problem in the presence of a double Fiedler value
Seriation is the problem of seeking the best enumeration order of a set of units whose interrelationship is described by a bipartite graph. An algorithm for spectral seriation based on the Fiedler vector of the Laplacian matrix associated with the problem was developed by Atkins et al. under the assumption that the Fiedler value is simple. In this paper, we analyze the case in which the Fiedler value of the Laplacian is not simple, discuss its effect on the set of admissible solutions, and study possible approaches to actually perform the computation. Examples and numerical experiments illustrate the effectiveness of the proposed methods.
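A minimal NumPy sketch of the basic spectral seriation step referred to above, assuming a dense symmetric similarity matrix `S` and a simple Fiedler value; the paper is concerned precisely with the case where the second-smallest Laplacian eigenvalue is not simple, in which case the ordering returned below is no longer uniquely determined.

```python
import numpy as np

def spectral_seriation(S):
    """Order units by sorting the Fiedler vector of the graph Laplacian."""
    L = np.diag(S.sum(axis=1)) - S   # graph Laplacian of the similarity graph
    w, V = np.linalg.eigh(L)         # eigenvalues in ascending order
    fiedler = V[:, 1]                # eigenvector of the second-smallest eigenvalue
    return np.argsort(fiedler)       # permutation proposed as the enumeration order
```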
A survey and comparison of contemporary algorithms for computing the matrix geometric mean
In this paper we present a survey of various algorithms for computing matrix geometric means and derive new second-order optimization algorithms to compute the Karcher mean. These new algorithms are constructed using the standard definition of the Riemannian Hessian. The survey includes the ALM list of desired properties for a geometric mean, the analytical expression for the mean of two matrices, algorithms based on the centroid computation in Euclidean (flat) space, and Riemannian optimization techniques to compute the Karcher mean (preceded by a short introduction to differential geometry). A change of metric is considered in the optimization techniques to reduce the complexity of the structures used in these algorithms. Numerical experiments are presented to compare the existing and the newly developed algorithms. We conclude that first-order algorithms are currently best suited for this optimization problem as the size and/or number of the matrices increases.
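As a point of reference for the first-order methods compared in the survey, here is a small sketch of the well-known fixed-point iteration (Riemannian gradient descent with unit step) for the Karcher mean of symmetric positive definite matrices; the helper `_sym_fun`, the arithmetic-mean starting guess, and the stopping rule are illustrative choices, not the paper's implementation.

```python
import numpy as np

def _sym_fun(M, f):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, Q = np.linalg.eigh(M)
    return (Q * f(w)) @ Q.T

def karcher_mean(mats, tol=1e-12, max_iter=200):
    """Unit-step Riemannian gradient descent for the Karcher mean of SPD matrices."""
    X = sum(mats) / len(mats)                        # arithmetic mean as starting guess
    for _ in range(max_iter):
        Xh = _sym_fun(X, np.sqrt)                    # X^{1/2}
        Xih = _sym_fun(X, lambda w: 1 / np.sqrt(w))  # X^{-1/2}
        # Average of log(X^{-1/2} A_i X^{-1/2}): vanishes exactly at the Karcher mean.
        S = sum(_sym_fun(Xih @ A @ Xih, np.log) for A in mats) / len(mats)
        X = Xh @ _sym_fun(S, np.exp) @ Xh            # exponential-map update
        if np.linalg.norm(S) < tol:
            break
    return X
```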
Adaptive cross approximation for ill-posed problems
Integral equations of the first kind with a smooth kernel and a perturbed right-hand side, which represents available contaminated data, arise in many applications. Discretization gives rise to linear systems of equations with a matrix whose singular values cluster at the origin. The solution of these systems of equations requires regularization, which has the effect that components of the computed solution connected to singular vectors associated with small singular values are damped or ignored. To compute a useful approximate solution, typically only approximations of a fairly small number of the largest singular values and associated singular vectors of the matrix are required. The present paper explores the possibility of determining these approximate singular values and vectors by adaptive cross approximation. This approach is particularly useful when a fine discretization of the integral equation is required and the resulting linear system of equations is of large dimension, because adaptive cross approximation makes it possible to compute only fairly few of the matrix entries.
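A sketch of adaptive cross approximation with partial pivoting in its generic Bebendorf-style form, not the specific variant analyzed in the paper; the `entry(i, j)` callback, pivot strategy, and stopping heuristic are illustrative assumptions. The point is that only a few rows and columns of the (implicit) matrix are ever evaluated.

```python
import numpy as np

def aca_partial_pivot(entry, m, n, tol=1e-6, max_rank=30):
    """Cross approximation A ~= U @ V of an implicit m-by-n matrix whose
    entries are supplied by the callback entry(i, j)."""
    U, V = [], []
    used_rows, used_cols = set(), set()
    i = 0                        # first row pivot (arbitrary choice)
    norm2 = 0.0                  # rough running estimate of ||U V||_F^2
    for _ in range(max_rank):
        used_rows.add(i)
        # Residual of row i with respect to the current approximation.
        row = np.array([entry(i, j) for j in range(n)], dtype=float)
        for u, v in zip(U, V):
            row -= u[i] * v
        row[list(used_cols)] = 0.0
        j = int(np.argmax(np.abs(row)))
        if abs(row[j]) < tol:
            break
        used_cols.add(j)
        v_new = row / row[j]
        # Residual of column j.
        col = np.array([entry(k, j) for k in range(m)], dtype=float)
        for u, v in zip(U, V):
            col -= v[j] * u
        U.append(col)
        V.append(v_new)
        norm2 += np.dot(col, col) * np.dot(v_new, v_new)
        if np.linalg.norm(col) * np.linalg.norm(v_new) <= tol * np.sqrt(norm2):
            break
        # Next row pivot: largest residual entry of the new column.
        col_rest = np.abs(col)
        col_rest[list(used_rows)] = -1.0
        i = int(np.argmax(col_rest))
    Umat = np.column_stack(U) if U else np.zeros((m, 0))
    Vmat = np.vstack(V) if V else np.zeros((0, n))
    return Umat, Vmat
```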
Computing with quasiseparable matrices
The class of quasiseparable matrices is defined by a pair of bounds, called the quasiseparable orders, on the ranks of the maximal submatrices entirely located in their strictly lower and upper triangular parts. These matrices arise naturally in applications, e.g. as inverses of band matrices, and are widely used because they admit structured representations that allow computing with them in time linear in the dimension and quadratic in the quasiseparable order. We show, in this paper, the connection between the notion of quasiseparability and the rank profile matrix invariant presented in [Dumas et al., ISSAC'15]. This allows us to propose an algorithm computing the quasiseparable orders (rL, rU) in time O(n^2 s^(ω−2)), where s = max(rL, rU) and ω is the exponent of matrix multiplication. We then present two new structured representations, a binary tree of PLUQ decompositions and the Bruhat generator, using respectively O(ns log(n/s)) and O(ns) field elements instead of O(ns^2) for the previously known generators. We present algorithms computing these representations in time O(n^2 s^(ω−2)). These representations allow a matrix-vector product in time linear in the size of the representation. Lastly, we show how to multiply two such structured matrices in time O(n^2 s^(ω−2)).
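A naive check of the definition, assuming a dense matrix: take the rank of every maximal block lying strictly below and strictly above the diagonal. This costs a full rank computation per split point, nothing like the O(n^2 s^(ω−2)) algorithm of the paper, but it makes the invariant concrete.

```python
import numpy as np

def quasiseparable_orders(A):
    """Return (rL, rU): the maximal ranks of the submatrices contained in the
    strictly lower and strictly upper triangular parts of A."""
    n = A.shape[0]
    rL = rU = 0
    for k in range(1, n):
        rL = max(rL, np.linalg.matrix_rank(A[k:, :k]))  # maximal lower block at split k
        rU = max(rU, np.linalg.matrix_rank(A[:k, k:]))  # maximal upper block at split k
    return rL, rU
```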
Semiseparable integral operators and explicit solution of an inverse problem for the skew-self-adjoint Dirac-type system
The inverse problem of recovering a skew-self-adjoint Dirac-type system from its generalized Weyl matrix function is treated in the paper. Sufficient conditions under which the unique solution of the inverse problem exists are formulated in terms of the Weyl function, and a procedure to solve the inverse problem is given. The case of generalized Weyl functions determined by a strictly proper rational matrix function and a diagonal matrix is treated in greater detail. Explicit formulas for the inversion of the corresponding semiseparable integral operators and for the recovery of the Dirac-type system are obtained for this case.
An implicit multishift QR-algorithm for symmetric plus low rank matrices
Hermitian matrices plus possibly non-Hermitian low rank corrections can be efficiently reduced to Hessenberg form. The resulting Hessenberg matrix can still be written as the sum of a Hermitian matrix and a low rank one. In this paper we develop a new implicit multishift QR-algorithm for Hessenberg matrices that are the sum of a Hermitian matrix and a possibly non-Hermitian low rank correction. The proposed algorithm exploits both the symmetry and the low rank structure to obtain a QR-step involving only O(n) floating point operations instead of the standard O(n^2) operations needed for performing a QR-step on a Hessenberg matrix. The algorithm is based on a suitable O(n) representation of the Hessenberg matrix. The low rank parts present in both the Hermitian and the low rank term of the sum are compactly stored by a sequence of Givens transformations and a few vectors. Due to the new representation, we cannot apply classical deflation techniques for Hessenberg matrices. A new, efficient technique is developed to overcome this problem. Some numerical experiments based on matrices arising in applications are performed. The experiments illustrate the effectiveness and accuracy of both the QR-algorithm and the newly developed deflation technique.
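For context, a plain dense sketch of one implicit single-shift QR step on an upper Hessenberg matrix via Givens rotations (bulge chasing); it costs O(n^2) per step and ignores the Hermitian-plus-low-rank structure, which is precisely the cost the paper's O(n) structured representation avoids.

```python
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def implicit_qr_step(H, sigma):
    """One implicit single-shift QR step on an upper Hessenberg matrix H."""
    n = H.shape[0]
    H = H.copy()
    # First rotation determined by the first column of H - sigma*I.
    c, s = givens(H[0, 0] - sigma, H[1, 0])
    G = np.array([[c, s], [-s, c]])
    H[0:2, :] = G @ H[0:2, :]
    H[:, 0:2] = H[:, 0:2] @ G.T
    # Chase the bulge down the subdiagonal to restore Hessenberg form.
    for k in range(1, n - 1):
        c, s = givens(H[k, k - 1], H[k + 1, k - 1])
        G = np.array([[c, s], [-s, c]])
        H[k:k + 2, :] = G @ H[k:k + 2, :]
        H[:, k:k + 2] = H[:, k:k + 2] @ G.T
        H[k + 1, k - 1] = 0.0
    return H
```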
An implicit filter for rational Krylov using core transformations
The rational Krylov method is a powerful tool for computing a selected subset of eigenvalues in large-scale eigenvalue problems. In this paper we study a method to implicitly apply a filter in a rational Krylov iteration by directly acting on a QR-factorized representation of the Hessenberg pair from the rational Krylov method. This filter is used to restart the iteration, which is generally required to limit the orthogonalization and storage costs. The contribution of this paper is threefold. First, we reformulate existing procedures in terms of operations on core transformations, which has the advantage of improved convergence monitoring. Second, we demonstrate that the extended QZ method is a special case of this more general method. Finally, numerical experiments show the validity and the increased accuracy of the new approach compared with existing methods.
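A minimal dense sketch of the underlying rational Krylov iteration (Ruhe-style shift-and-invert steps with Gram-Schmidt orthogonalization), just to fix ideas; the paper works instead with a QR-factorized representation of the Hessenberg pair built from core transformations, and the implicit filtering/restarting step is not shown here.

```python
import numpy as np

def rational_krylov_basis(A, b, shifts):
    """Orthonormal basis of the rational Krylov subspace generated by the
    starting vector b and the shifts s_1, ..., s_k."""
    n = A.shape[0]
    V = np.zeros((n, len(shifts) + 1), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for j, s in enumerate(shifts):
        w = np.linalg.solve(A - s * np.eye(n), V[:, j])   # (A - s I)^{-1} v_j
        for _ in range(2):                                # Gram-Schmidt, repeated once
            w -= V[:, : j + 1] @ (V[:, : j + 1].conj().T @ w)
        V[:, j + 1] = w / np.linalg.norm(w)
    return V
```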