Efficient Computation of the Characteristic Polynomial
This article deals with the computation of the characteristic polynomial of
dense matrices over small finite fields and over the integers. We first present
two algorithms for finite fields: one is based on Krylov iterates and
Gaussian elimination, and we compare it to an improvement of the second
algorithm of Keller-Gehrig. Then we show that a generalization of Keller-Gehrig's
third algorithm can improve both the complexity and the computational time. We use these
results as a basis for the computation of the characteristic polynomial of
integer matrices. We first use early termination and Chinese remaindering for
dense matrices. Then a probabilistic approach, based on the integer minimal
polynomial and Hensel factorization, is particularly well suited to sparse
and/or structured matrices.
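
To make the finite-field part concrete, here is a minimal Python sketch of the Krylov-plus-Gaussian-elimination idea (not the article's optimized algorithm): for a nonderogatory matrix $A$ and a generic random vector $v$, the iterates $v, Av, \dots, A^{n-1}v$ are independent, and eliminating $A^n v$ against them yields the coefficients of the characteristic polynomial. The function name and the list-of-lists matrix representation are illustrative choices.

```python
import random

def charpoly_mod_p(A, p):
    """Characteristic polynomial of A over GF(p), coefficients low -> high.

    Minimal sketch: assumes p is prime, A is nonderogatory, and the random
    vector v is generic, so that x^n + c_{n-1} x^{n-1} + ... + c_0 is found
    from the first linear dependence among the Krylov iterates A^k v.
    """
    n = len(A)
    v = [random.randrange(p) for _ in range(n)]
    K = [v]                                   # K[k] = A^k v
    for _ in range(n):
        w = K[-1]
        K.append([sum(A[i][j] * w[j] for j in range(n)) % p
                  for i in range(n)])
    # Solve sum_k c_k A^k v = -A^n v by Gaussian elimination mod p:
    # row i of the augmented system is (K[0][i], ..., K[n-1][i] | -K[n][i]).
    M = [[K[k][i] for k in range(n)] + [-K[n][i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])  # fails if v is not generic
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)      # inverse modulo the prime p
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)] + [1]  # monic: c_0, ..., c_{n-1}, 1
```

For integer matrices, the dense approach described above amounts to running such a modular computation for several primes and combining the images by Chinese remaindering, stopping early once the reconstructed coefficients stabilize.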
Computing Minimal Polynomials of Matrices
We present and analyse a Monte-Carlo algorithm to compute the minimal
polynomial of an $n \times n$ matrix over a finite field that requires $O(n^3)$
field operations and $O(n)$ random vectors, and is well suited for successful
practical implementation. The algorithm, and its complexity analysis, use
standard algorithms for polynomial and matrix operations. We compare features
of the algorithm with several other algorithms in the literature. In addition
we present a deterministic verification procedure which is similarly efficient
in most cases but has a worst-case complexity of $O(n^4)$. Finally, we report
the results of practical experiments with an implementation of our algorithms
in comparison with the current algorithms in the GAP library.
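
The core Monte-Carlo principle is easy to sketch (the paper's algorithm and its complexity analysis are considerably more refined): the minimal polynomial of $A$ over GF($p$) is the least common multiple of the minimal polynomials of individual vectors, so accumulating the lcm over random vectors yields it with high probability. The following self-contained Python sketch represents polynomials as coefficient lists, lowest degree first; all names are illustrative.

```python
import random

def psub(a, b, f, p):                      # a - f*b for coefficient lists
    L = max(len(a), len(b))
    a, b = a + [0] * (L - len(a)), b + [0] * (L - len(b))
    return [(x - f * y) % p for x, y in zip(a, b)]

def pmul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def pdivmod(a, b, p):                      # quotient and remainder mod prime p
    a, d, inv = a[:], len(b) - 1, pow(b[-1], p - 2, p)
    q = [0] * max(len(a) - d, 1)
    for i in range(len(a) - 1, d - 1, -1):
        f = a[i] * inv % p
        q[i - d] = f
        a = psub(a, [0] * (i - d) + b, f, p)
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return q, a

def plcm(a, b, p):
    g, other = a[:], b[:]                  # Euclidean gcd, then exact division
    while other != [0]:
        g, other = other, pdivmod(g, other, p)[1]
    q = pdivmod(pmul(a, b, p), g, p)[0]
    inv = pow(q[-1], p - 2, p)             # normalize the lcm to be monic
    return [c * inv % p for c in q]

def minpoly_of_vector(A, v, p):
    """Least-degree monic g over GF(p) with g(A) v = 0."""
    n = len(v)
    basis = []                             # (pivot, reduced vector, its polynomial)
    w, k = v[:], 0                         # invariant: w = A^k v
    while True:
        r, q = w[:], [0] * k + [1]         # before reduction, q(A) v = w
        for piv, bv, bq in basis:
            if r[piv]:
                f = r[piv]
                r = [(x - f * y) % p for x, y in zip(r, bv)]
                q = psub(q, bq, f, p)
        piv = next((i for i, x in enumerate(r) if x), None)
        if piv is None:
            return q                       # q(A) v = 0, monic of degree k
        inv = pow(r[piv], p - 2, p)
        basis.append((piv, [x * inv % p for x in r], [c * inv % p for c in q]))
        w = [sum(A[i][j] * w[j] for j in range(n)) % p for i in range(n)]
        k += 1

def minpoly(A, p, trials=None):
    """Monte-Carlo: lcm of minimal polynomials of random vectors."""
    n, m = len(A), [1]
    for _ in range(trials or n):
        v = [random.randrange(p) for _ in range(n)]
        m = plcm(m, minpoly_of_vector(A, v, p), p)
        if len(m) == n + 1:                # degree n reached: cannot grow further
            break
    return m
```

The early exit when the accumulated degree reaches $n$ reflects that the minimal polynomial divides the characteristic polynomial; in general, a verification step such as the paper's deterministic procedure is needed to certify the result.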
Low rank tensor recovery via iterative hard thresholding
We study extensions of compressive sensing and low rank matrix recovery
(matrix completion) to the recovery of low rank tensors of higher order from a
small number of linear measurements. While the theoretical understanding of low
rank matrix recovery is already well-developed, only a few contributions on the
low rank tensor recovery problem are available so far. In this paper, we
introduce versions of the iterative hard thresholding algorithm for several
tensor decompositions, namely the higher order singular value decomposition
(HOSVD), the tensor train format (TT), and the general hierarchical Tucker
decomposition (HT). We provide a partial convergence result for these
algorithms which is based on a variant of the restricted isometry property of
the measurement operator, adapted to the tensor decomposition at hand, which
induces a corresponding notion of tensor rank. We show that subgaussian
measurement ensembles satisfy the tensor restricted isometry property with high
probability under a certain almost optimal bound on the number of measurements
which depends on the corresponding tensor format. These bounds are extended to
partial Fourier maps combined with random sign flips of the tensor entries.
Finally, we illustrate the performance of iterative hard thresholding methods
for tensor recovery via numerical experiments where we consider recovery from
Gaussian random measurements, tensor completion (recovery of missing entries),
and Fourier measurements for third order tensors.
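
For orientation, here is a minimal numpy sketch of iterative hard thresholding in the simplest, order-two (matrix) case; the paper's tensor algorithms replace the truncated-SVD projection below with the HOSVD, TT, or HT truncation matching the chosen format. The sizes, the unit step size, and the Gaussian operator are illustrative choices that rely on the restricted isometry of the measurement map.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, m = 30, 30, 2, 400               # dimensions, rank, measurement count

# Ground-truth rank-r matrix and a Gaussian measurement operator on vec(X).
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
y = A @ X_true.ravel()

def hard_threshold(Z, r):
    """Project onto matrices of rank <= r via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n1, n2))
for _ in range(200):
    grad = A.T @ (A @ X.ravel() - y)        # gradient of 0.5 * ||A(X) - y||^2
    X = hard_threshold(X - grad.reshape(n1, n2), r)

print(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

Each iteration is a gradient step on the least-squares misfit followed by a projection back onto the low-rank set, which is exactly the structure the tensor variants retain.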
Tensor-Structured Coupled Cluster Theory
We derive and implement a new way of solving coupled cluster equations with
lower computational scaling. Our method is based on the decomposition of both
amplitudes and two-electron integrals, using a combination of tensor
hypercontraction and canonical polyadic decomposition. While the original
theory scales as $O(N^6)$ with respect to the number of basis functions, we
demonstrate numerically that we achieve sub-millihartree difference from the
original theory with $O(N^4)$ scaling. This is accomplished by solving directly
for the factors that decompose the cluster operator. The proposed scheme is
quite general and can be easily extended to other many-body methods.
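
As a toy illustration of why such factorizations lower the scaling (this is not the coupled cluster working equations): once a four-index quantity is stored in canonical polyadic form, contractions touch only the small factor matrices and the full tensor is never formed. All sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, R = 20, 8                                    # mode size and CP rank
U = [rng.standard_normal((n, R)) for _ in range(4)]

# CP format: T[i,j,a,b] = sum_r U0[i,r] U1[j,r] U2[a,r] U3[b,r].
T = np.einsum('ir,jr,ar,br->ijab', *U)          # dense reference, O(n^4) memory
V = rng.standard_normal((n, n))

# Target contraction: S[i,j] = sum_{a,b} T[i,j,a,b] * V[a,b].
dense = np.einsum('ijab,ab->ij', T, V)          # O(n^4) work on the full tensor

# Factorized route: never build T. Fold V into the last two factors,
# w[r] = sum_{a,b} U2[a,r] V[a,b] U3[b,r], then expand over the first two.
w = np.einsum('ar,ab,br->r', U[2], V, U[3])     # O(n^2 R) work
fact = np.einsum('ir,jr,r->ij', U[0], U[1], w)  # O(n^2 R) work

assert np.allclose(dense, fact)                 # identical result, lower cost
```

The dense route costs $O(n^4)$ in both memory and work, while the factorized route costs $O(n^2 R)$; reductions of this kind, applied to the amplitudes and integrals, are what drive the method's lower scaling.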