
    A Matrix Hyperbolic Cosine Algorithm and Applications

    In this paper, we generalize Spencer's hyperbolic cosine algorithm to the matrix-valued setting. We apply the proposed algorithm to several problems by analyzing its computational efficiency in two special cases of matrices: one in which the matrices have a group structure, and another in which they have rank one. As an application of the former case, we present a deterministic algorithm that, given the multiplication table of a finite group of size $n$, constructs an expanding Cayley graph of logarithmic degree in near-optimal $O(n^2 \log^3 n)$ time. For the latter case, we present a fast deterministic algorithm for spectral sparsification of positive semi-definite matrices, which implies an improved deterministic algorithm for spectral graph sparsification of dense graphs. In addition, we give an elementary connection between spectral sparsification of positive semi-definite matrices and element-wise matrix sparsification. As a consequence, we obtain improved element-wise sparsification algorithms for diagonally dominant-like matrices.

    Comment: 16 pages, simplified proof and corrected acknowledgment of prior work in (current) Section
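
    For intuition, the following minimal Python sketch illustrates the potential-function idea underlying hyperbolic cosine algorithms of this kind: given symmetric matrices, signs are chosen greedily so that the potential $\operatorname{tr}\cosh(\lambda S)$ of the running signed sum $S$ stays small, which in turn bounds the spectral norm of $S$. This is an illustrative sketch under our own naming (trace_cosh, greedy_signs, lam), not the paper's optimized procedure.

    import numpy as np

    def trace_cosh(S, lam):
        # tr cosh(lam * S) for symmetric S, computed via its eigenvalues
        return np.sum(np.cosh(lam * np.linalg.eigvalsh(S)))

    def greedy_signs(mats, lam=1.0):
        # Derandomized balancing: pick each sign eps in {-1, +1} so that
        # the potential tr cosh(lam * running_sum) stays as small as possible.
        n = mats[0].shape[0]
        S = np.zeros((n, n))
        signs = []
        for M in mats:
            eps = 1 if trace_cosh(S + M, lam) <= trace_cosh(S - M, lam) else -1
            S += eps * M
            signs.append(eps)
        return signs, S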

    A New Algorithm for Computing the Actions of Trigonometric and Hyperbolic Matrix Functions

    A new algorithm is derived for computing the actions $f(tA)B$ and $f(tA^{1/2})B$, where $f$ is the cosine, sinc, sine, hyperbolic cosine, hyperbolic sinc, or hyperbolic sine function, $A$ is an $n \times n$ matrix, and $B$ is $n \times n_0$ with $n_0 \ll n$. $A^{1/2}$ denotes any matrix square root of $A$, and it is never required to be computed explicitly. The algorithm offers six independent output options given $t$, $A$, $B$, and a tolerance. For each option, the actions of a pair of trigonometric or hyperbolic matrix functions are computed simultaneously. The algorithm scales the matrix $A$ down by a positive integer $s$, approximates $f(s^{-1}tA)B$ by a truncated Taylor series, and finally uses the recurrences of the Chebyshev polynomials of the first and second kind to recover $f(tA)B$. The scaling parameter and the degree of the Taylor polynomial are selected based on a forward error analysis and a sequence of the form $\|A^k\|^{1/k}$, in such a way that the overall computational cost of the algorithm is minimized. Shifting is used where applicable as a preprocessing step to reduce the scaling parameter. The algorithm works for any matrix $A$, and its computational cost is dominated by the formation of products of $A$ with $n \times n_0$ matrices, which can take advantage of level-3 BLAS implementations. Our numerical experiments show that the new algorithm behaves in a forward stable fashion and in most problems outperforms existing algorithms in terms of CPU time, computational cost, and accuracy.

    Comment: 16 pages, 4 figures
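
    To make the scaling-and-recovery structure concrete, here is a hedged Python sketch for the cosine/sine pair: it approximates $\cos(hA)B$ and $\sin(hA)B$ by a truncated Taylor series with $h = t/s$, then recovers $\cos(tA)B$ and $\sin(tA)B$ through the three-term Chebyshev-type recurrences $\cos((j+1)hA)B = 2\cos(hA)\cos(jhA)B - \cos((j-1)hA)B$ and the sine analogue. The fixed parameters s and m stand in for the paper's error-analysis-based selection, and all function names are our own.

    import numpy as np

    def taylor_cos_sin(A, B, h, m):
        # Truncated Taylor series for cos(hA)B and sin(hA)B, formed only
        # through products of A with n x n0 blocks (level-3 BLAS friendly).
        C = B.copy()              # partial sum for cos(hA)B
        P = B.copy()              # current term (hA)^(2k) B / (2k)!
        S = h * (A @ B)           # partial sum for sin(hA)B
        for k in range(1, m + 1):
            P = h * (A @ (h * (A @ P))) / ((2*k - 1) * (2*k))
            C += (-1)**k * P
            S += (-1)**k * h * (A @ P) / (2*k + 1)
        return C, S

    def cos_sin_action(A, B, t, s=8, m=12):
        # Scale down to h = t/s, approximate on the scaled problem, then
        # climb back to cos(tA)B, sin(tA)B via the three-term recurrences.
        h = t / s
        C_prev, S_prev = B.copy(), np.zeros_like(B)   # j = 0
        C_cur, S_cur = taylor_cos_sin(A, B, h, m)     # j = 1
        for _ in range(1, s):
            CC, _ = taylor_cos_sin(A, C_cur, h, m)    # cos(hA) applied to C_cur
            CS, _ = taylor_cos_sin(A, S_cur, h, m)    # cos(hA) applied to S_cur
            C_prev, C_cur = C_cur, 2 * CC - C_prev
            S_prev, S_cur = S_cur, 2 * CS - S_prev
        return C_cur, S_cur

    The actual algorithm additionally handles the $f(tA^{1/2})B$ forms through even/odd power series without ever forming $A^{1/2}$; this sketch omits that and the tolerance-driven choice of $s$ and $m$.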