A Matrix Hyperbolic Cosine Algorithm and Applications
In this paper, we generalize Spencer's hyperbolic cosine algorithm to the
matrix-valued setting. We apply the proposed algorithm to several problems by
analyzing its computational efficiency in two special cases: one in which
the matrices have a group structure, and another in which they have
rank one. As an application of the former case, we present a deterministic
algorithm that, given the multiplication table of a finite group of size n,
constructs an expanding Cayley graph of logarithmic degree in near-optimal
O(n^2 log^3 n) time. For the latter case, we present a fast deterministic
algorithm for spectral sparsification of positive semi-definite matrices, which
implies an improved deterministic algorithm for spectral graph sparsification
of dense graphs. In addition, we give an elementary connection between spectral
sparsification of positive semi-definite matrices and element-wise matrix
sparsification. As a consequence, we obtain improved element-wise
sparsification algorithms for diagonally dominant-like matrices.
Comment: 16 pages; simplified proof and corrected acknowledgment of prior work
in (current) Section
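The Spencer-style hyperbolic cosine potential can be illustrated in the rank-one case: to keep the spectral norm of a signed sum of outer products small, pick each sign greedily so that the potential tr cosh(cM) does not grow. The sketch below is only an illustration of that potential-function idea; the function names, the greedy loop, and the choice of c are ours, not the paper's algorithm or its analysis:

```python
import numpy as np

def trace_cosh(M, c):
    # potential Phi(M) = tr cosh(c*M), computed from the eigenvalues of M
    return np.cosh(c * np.linalg.eigvalsh(M)).sum()

def balance_signs(vectors, c=1.0):
    # greedily sign rank-one updates v v^T so that the hyperbolic cosine
    # potential -- and hence ||sum_i eps_i v_i v_i^T|| -- stays small
    d = len(vectors[0])
    M = np.zeros((d, d))
    signs = []
    for v in vectors:
        R = np.outer(v, v)
        eps = 1 if trace_cosh(M + R, c) <= trace_cosh(M - R, c) else -1
        signs.append(eps)
        M += eps * R
    return signs, M
```

Because cosh penalizes large eigenvalues of both signs symmetrically, keeping the potential small keeps the spectral norm of the signed sum small; the deterministic sparsification results arise from controlling this trade-off quantitatively.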
A New Algorithm for Computing the Actions of Trigonometric and Hyperbolic Matrix Functions
A new algorithm is derived for computing the actions f(tA)B and
f(t sqrt(A))B, where f is the cosine, sinc, sine, hyperbolic cosine,
hyperbolic sinc, or hyperbolic sine function, A is an n x n matrix, and B is
n x n_0 with n_0 << n. sqrt(A) denotes any matrix square root of A
and it is never required to be computed. The algorithm offers six independent
output options given A, B, t, and a tolerance. For each option, the actions
of a pair of trigonometric or hyperbolic matrix functions are simultaneously
computed. The algorithm scales the matrix A down by a positive integer s,
approximates f((t/s)A)B by a truncated Taylor series, and finally uses the
recurrences of the Chebyshev polynomials of the first and second kind to
recover f(tA)B. The selection of the scaling parameter s and the degree of the
Taylor polynomial are based on a forward error analysis and a sequence of the
form ||A^k||^(1/k), chosen in such a way that the overall computational cost of the
algorithm is optimized. Shifting is used where applicable as a preprocessing
step to reduce the scaling parameter. The algorithm works for any matrix A,
and its computational cost is dominated by the formation of products of A
with n x n_0 matrices that could take advantage of the implementation of
level-3 BLAS. Our numerical experiments show that the new algorithm behaves in
a forward stable fashion and in most problems outperforms the existing
algorithms in terms of CPU time, computational cost, and accuracy.
Comment: 16 pages, 4 figures
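The scale-and-recover step can be sketched for the cosine action: approximate cos((A/s))B by a truncated Taylor series, then climb back up with the three-term recurrence cos((k+1)X)B = 2 cos(X) cos(kX)B - cos((k-1)X)B. The sketch below (with t = 1, a fixed Taylor degree, and no error analysis or cost optimization) is our own illustration of that idea, not the paper's algorithm:

```python
import numpy as np

def cos_taylor_action(X, V, m=20):
    # cos(X) V  ~=  sum_{k=0}^{m} (-1)^k X^{2k} V / (2k)!
    # computed with matrix-vector-block products only; X is never squared
    term = V.copy()
    out = V.copy()
    for k in range(1, m + 1):
        term = -X @ (X @ term) / ((2 * k - 1) * (2 * k))
        out = out + term
    return out

def cos_times_B(A, B, s, m=20):
    # recover cos(A)B from actions of cos(A/s) via the Chebyshev-type
    # recurrence cos((k+1)X)B = 2 cos(X) cos(kX)B - cos((k-1)X)B, X = A/s
    X = A / s
    Y_prev, Y_curr = B, cos_taylor_action(X, B, m)  # cos(0)B and cos(X)B
    for _ in range(s - 1):
        Y_prev, Y_curr = Y_curr, 2 * cos_taylor_action(X, Y_curr, m) - Y_prev
    return Y_curr
```

Note that every step only forms products of A (or A/s) with n x n_0 blocks, which is why the cost is dominated by level-3 BLAS-friendly operations, as the abstract describes.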