
    Fast truncation of mode ranks for bilinear tensor operations

    We propose a fast algorithm for mode rank truncation of the result of a bilinear operation on 3-tensors given in the Tucker or canonical form. If the arguments and the result have mode sizes n and mode ranks r, the computation costs O(nr^3 + r^4). The algorithm is based on the cross approximation of Gram matrices, and the accuracy of the resulting Tucker approximation is limited by the square root of machine precision.
    Comment: 9 pages, 2 tables. Submitted to Numerical Linear Algebra and Applications, special edition for the ICSMT conference, Hong Kong, January 201
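    The Gram-matrix idea behind Tucker rank truncation can be illustrated with a plain HOSVD-style sketch in NumPy. This is only an illustration of the principle, not the paper's O(nr^3 + r^4) cross-approximation algorithm, which avoids forming the full tensor; the function name and shapes below are assumptions for the example.

    ```python
    import numpy as np

    def truncate_mode_ranks(T, ranks):
        """Truncate the Tucker mode ranks of a 3-tensor via Gram-matrix
        eigendecompositions (HOSVD-style sketch, not the paper's fast
        cross-approximation algorithm)."""
        factors = []
        for mode, r in enumerate(ranks):
            # unfold along `mode`: rows indexed by this mode
            U = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            G = U @ U.T                        # Gram matrix of the unfolding
            w, V = np.linalg.eigh(G)           # eigenvalues in ascending order
            factors.append(V[:, ::-1][:, :r])  # keep the top-r eigenvectors
        # core = T contracted with the factor matrices on each mode
        core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
        return core, factors

    # truncation is exact when the tensor already has the target mode ranks
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 9 * 10))
    T = A.reshape(8, 9, 10)                    # mode-0 rank is 2
    core, Us = truncate_mode_ranks(T, (2, 9, 10))
    T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
    ```
    
    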

    Randomized Rounding for the Largest Simplex Problem

    The maximum volume j-simplex problem asks to compute the j-dimensional simplex of maximum volume inside the convex hull of a given set of n points in Q^d. We give a deterministic approximation algorithm for this problem which achieves an approximation ratio of e^{j/2 + o(j)}. The problem is known to be NP-hard to approximate within a factor of c^j for some constant c > 1. Our algorithm also gives a factor e^{j + o(j)} approximation for the problem of finding the principal j×j submatrix of a rank-d positive semidefinite matrix with the largest determinant. We achieve our approximation by rounding solutions to a generalization of the D-optimal design problem, or, equivalently, the dual of an appropriate smallest enclosing ellipsoid problem. Our arguments give a short and simple proof of a restricted invertibility principle for determinants.
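    For intuition about the largest-determinant principal submatrix problem, here is a naive greedy baseline. The paper's algorithm instead rounds solutions of a D-optimal design relaxation; the function `greedy_max_det_submatrix` and the test data are assumptions for this sketch, not the paper's method.

    ```python
    import numpy as np

    def greedy_max_det_submatrix(M, j):
        """Greedily grow an index set whose principal submatrix of the
        PSD matrix M has large determinant. A simple baseline, not the
        paper's rounding-based approximation algorithm."""
        chosen = []
        for _ in range(j):
            best, best_det = None, -np.inf
            for i in range(M.shape[0]):
                if i in chosen:
                    continue
                idx = chosen + [i]
                d = np.linalg.det(M[np.ix_(idx, idx)])
                if d > best_det:
                    best, best_det = i, d
            chosen.append(best)            # keep the index with largest det
        return sorted(chosen)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((6, 3))
    M = X @ X.T                            # a rank-3 PSD matrix
    sel = greedy_max_det_submatrix(M, 2)   # 2x2 principal submatrix indices
    ```
    
    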

    Quasioptimality of maximum-volume cross interpolation of tensors

    We consider a cross interpolation of high-dimensional arrays in the tensor train format. We prove that the maximum-volume choice of the interpolation sets provides quasioptimal interpolation accuracy, which differs from the best possible accuracy by a factor that does not grow exponentially with the dimension. For nested interpolation sets we prove the interpolation property and propose greedy cross interpolation algorithms. We verify the theoretical results and measure the speed and accuracy of the proposed algorithms in numerical experiments.
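    The matrix (two-dimensional) case conveys the maximum-volume idea: choose interpolation rows and columns by the maxvol principle and interpolate through the resulting cross. Below is a sketch under the assumption of an exactly low-rank matrix, where the cross interpolation is exact; the paper treats the tensor-train generalization, and this code is only the standard matrix maxvol iteration.

    ```python
    import numpy as np

    def maxvol(A, n_iter=100):
        """Standard maxvol iteration: pick r rows of a tall n x r matrix
        whose r x r submatrix has close-to-maximal volume (|det|)."""
        n, r = A.shape
        rows = list(range(r))                    # start from the first r rows
        for _ in range(n_iter):
            B = A @ np.linalg.inv(A[rows])       # interpolation coefficients
            i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
            if abs(B[i, j]) < 1.0 + 1e-12:       # all coefficients <= 1: done
                break
            rows[j] = i                          # swap in the dominant row
        return rows

    # cross interpolation of an exactly rank-3 matrix is exact
    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    I = maxvol(U[:, :3])                         # interpolation rows
    J = maxvol(Vt[:3].T)                         # interpolation columns
    A_hat = A[:, J] @ np.linalg.inv(A[np.ix_(I, J)]) @ A[I, :]
    ```

    For a rank-r matrix, interpolating through any nonsingular r×r intersection reproduces the matrix exactly; the maximum-volume choice additionally controls the error blow-up in the approximate-rank case.
    
    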

    Parallel cross interpolation for high-precision calculation of high-dimensional integrals

    We propose a parallel version of the cross interpolation algorithm and apply it to calculate high-dimensional integrals motivated by the Ising model in quantum physics. In contrast to mainstream approaches, such as Monte Carlo and quasi-Monte Carlo, the samples calculated by our algorithm are neither random nor form a regular lattice. Instead, we evaluate the given function along individual dimensions (modes) and use these values to reconstruct its behaviour over the whole domain. The positions of the evaluated univariate fibres are chosen adaptively for the given function. The required evaluations can be executed in parallel along each mode (variable) and over all modes. To demonstrate the efficiency of the proposed method, we apply it to compute high-dimensional Ising susceptibility integrals, arising from asymptotic expansions for the spontaneous magnetisation in the two-dimensional Ising model of ferromagnetism. We observe strong superlinear convergence of the proposed method, while the MC and qMC algorithms converge sublinearly. Using multiple precision arithmetic, we also observe exponential convergence of the proposed algorithm. Combining high-order convergence, almost perfect scalability up to hundreds of processes, and the same flexibility as MC and qMC, the proposed algorithm can be a new method of choice for problems involving high-dimensional integration, e.g. in statistics, probability, and quantum physics.
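    The fibre idea is easiest to see for a function that separates into univariate factors: the d-dimensional integral then collapses into a product of one-dimensional quadratures, so d·n fibre evaluations replace n^d grid points. Cross interpolation extends this to functions of low tensor-train rank. The toy below illustrates only the separable case and is not the paper's parallel algorithm; the integrand is an assumption for the example.

    ```python
    import numpy as np
    from math import erf, sqrt, pi

    # If f(x_1, ..., x_d) = prod_k g(x_k), the d-dimensional integral is
    # the d-th power of a single 1-D quadrature.
    d, n = 10, 32
    nodes, weights = np.polynomial.legendre.leggauss(n)  # quadrature on [-1, 1]

    g = lambda x: np.exp(-x**2)          # one univariate factor (illustrative)
    one_dim = weights @ g(nodes)         # int_{-1}^{1} e^{-x^2} dx
    full = one_dim ** d                  # the d-dimensional integral

    exact = (sqrt(pi) * erf(1.0)) ** d   # closed form for comparison
    ```
    
    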

    Comparison of some Reduced Representation Approximations

    In the field of numerical approximation, specialists tackling highly complex problems have recently proposed various ways to simplify their underlying problems. Depending on the problem being tackled and the community at work, different approaches have been developed with some success and have even gained some maturity; they can now be applied to information analysis or to the numerical simulation of PDEs. At this point, a cross-analysis and an effort to understand the similarities and differences between these approaches, which have their starting points in different backgrounds, is of interest. It is the purpose of this paper to contribute to this effort by comparing some constructive reduced representations of complex functions. We present here in full detail the Adaptive Cross Approximation (ACA) and the Empirical Interpolation Method (EIM), together with other approaches that fall into the same category.
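    A compact sketch of fully pivoted ACA gives the flavour of the methods compared (the paper also treats partially pivoted ACA and EIM). Exact low rank is assumed below so that the iteration terminates at the true rank; in practice ACA uses partial pivoting so the full residual is never formed.

    ```python
    import numpy as np

    def aca(A, tol=1e-10, max_rank=None):
        """Adaptive Cross Approximation with full pivoting (simplest
        variant, for illustration). Returns U, V with A ~= U @ V."""
        R = A.astype(float).copy()           # residual matrix
        max_rank = max_rank or min(A.shape)
        U, V = [], []
        for _ in range(max_rank):
            i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
            if abs(R[i, j]) < tol:           # residual small enough: stop
                break
            u = R[:, j] / R[i, j]            # scaled pivot column
            v = R[i, :].copy()               # pivot row
            U.append(u)
            V.append(v)
            R -= np.outer(u, v)              # rank-1 update of the residual
        return np.array(U).T, np.array(V)

    rng = np.random.default_rng(3)
    A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 25))
    U, V = aca(A)                            # recovers the rank-4 structure
    ```
    
    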

    Application of hierarchical matrices for computing the Karhunen-Loève expansion

    Realistic mathematical models of physical processes contain uncertainties. These models are often described by stochastic differential equations (SDEs) or stochastic partial differential equations (SPDEs) with multiplicative noise. The uncertainties in the right-hand side or the coefficients are represented as random fields. To solve a given SPDE numerically one has to discretise the deterministic operator as well as the stochastic fields. The total dimension of the SPDE is the product of the dimensions of the deterministic part and the stochastic part. To approximate random fields with as few random variables as possible, while still retaining the essential information, the Karhunen-Loève expansion (KLE) becomes important. The KLE of a random field requires the solution of a large eigenvalue problem. Usually it is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of sparse hierarchical matrix techniques for this purpose. A log-linear computational cost of the matrix-vector product and a log-linear storage requirement yield an efficient and fast discretisation of the presented random fields.
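    The KLE eigenproblem can be sketched with a dense covariance matrix: its leading eigenpairs give the KL modes and variances. Hierarchical matrices make the matrix-vector products in this eigenproblem log-linear; the toy below simply forms the dense matrix, and the Gaussian kernel, grid, and correlation length are assumptions for the illustration.

    ```python
    import numpy as np

    # Discretised KLE eigenproblem for a 1-D random field on [0, 1].
    n = 200
    x = np.linspace(0.0, 1.0, n)
    ell = 0.2                                  # correlation length (assumed)
    # Gaussian (squared-exponential) covariance kernel
    C = np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))
    h = x[1] - x[0]                            # quadrature weight of the grid
    w, V = np.linalg.eigh(h * C)               # Nystroem-style discretisation
    w, V = w[::-1], V[:, ::-1]                 # eigenvalues in descending order

    m = 10                                     # truncation rank
    captured = w[:m].sum() / w.sum()           # fraction of variance retained
    ```

    The fast eigenvalue decay of smooth covariance kernels is what makes truncating the KLE after a few modes effective.
    
    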