    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable with classical techniques. This survey gives a literature overview of current developments in this area, with an emphasis on function-related tensors.
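    To make the subject of the survey concrete, the sketch below shows the simplest instance of low-rank approximation: a rank-r truncated SVD of the mode-1 unfolding (matricization) of a third-order tensor. It is not taken from the survey; the tensor shape, the rank r, and the random test data are illustrative assumptions.

```python
import numpy as np

def truncated_rank_r(T, r):
    """Best rank-r approximation (Eckart-Young) of the mode-1 unfolding of T."""
    M = T.reshape(T.shape[0], -1)                 # mode-1 unfolding (matricization)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_r = (U[:, :r] * s[:r]) @ Vt[:r, :]          # keep the r largest singular values
    return M_r.reshape(T.shape)

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 10, 10))             # illustrative random tensor
T_3 = truncated_rank_r(T, r=3)
print("relative error:", np.linalg.norm(T - T_3) / np.linalg.norm(T))
```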

    Convergence bounds for empirical nonlinear least-squares

    We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V},\|\bullet\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\bullet\|_n$ can be computed. The objective is to obtain an approximation $v\in\mathcal{M}$ of an unknown function $u \in \mathcal{V}$ by minimizing the empirical norm $\|u-v\|_n$. In the case of linear subspaces $\mathcal{M}$ it is well known that such least squares approximations can become inaccurate and unstable when the number of samples $n$ is too close to the number of parameters $m = \dim(\mathcal{M})$. We review this statement for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability, and we show that $n \gtrsim m$ is sufficient for the RIP to be satisfied with high probability. Several model classes are examined where analytical statements can be made about the RIP. Numerical experiments illustrate some of the obtained stability bounds. Comment: 32 pages, 18 figures; major revision.
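    The phenomenon described for linear subspaces can be reproduced with a few lines of NumPy. The sketch below is not the paper's code: it fits a target function on $[-1,1]$ in a linear polynomial space of dimension $m$ from $n$ uniform Monte Carlo samples (the unweighted special case of the weighted estimate above) and reports an error estimate on a dense grid. The target function, the dimension $m$, and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10                                   # number of parameters, dim(M)

def u(x):
    """Target function, treated as unknown by the least-squares solver."""
    return np.exp(np.sin(3.0 * x))

def empirical_best_approx(n):
    """Minimize the empirical norm ||u - v||_n over a degree-(m-1) polynomial space."""
    x = rng.uniform(-1.0, 1.0, size=n)                # Monte Carlo sample points
    A = np.polynomial.legendre.legvander(x, m - 1)    # n x m design matrix
    coef, *_ = np.linalg.lstsq(A, u(x), rcond=None)   # empirical least squares fit
    xg = np.linspace(-1.0, 1.0, 10_000)               # dense grid to estimate ||u - v||
    v = np.polynomial.legendre.legval(xg, coef)
    return np.sqrt(np.mean((u(xg) - v) ** 2))

for n in (m, 2 * m, 10 * m, 100 * m):
    print(f"n = {n:4d},  error estimate = {empirical_best_approx(n):.3e}")
# With n close to m the fit is typically unstable and the error can blow up;
# it stabilizes once n is a sufficiently large multiple of m.
```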

    Tensor Networks and Hierarchical Tensors for the Solution of High-dimensional Partial Differential Equations

    Hierarchical tensors can be regarded as a generalisation, preserving many crucial features, of the singular value decomposition to higher-order tensors. For a given tensor product space, a recursive decomposition of the set of coordinates into a dimension tree gives a hierarchy of nested subspaces and corresponding nested bases. The dimensions of these subspaces yield a notion of multilinear rank. This rank tuple, as well as quasi-optimal low-rank approximations by rank truncation, can be obtained by a hierarchical singular value decomposition. For fixed multilinear ranks, the storage and operation complexity of these hierarchical representations scale only linearly in the order of the tensor. As in the matrix case, the set of hierarchical tensors of a given multilinear rank is not a convex set, but forms an open smooth manifold. A number of techniques for the computation of low-rank approximations have been developed, including local optimisation techniques on Riemannian manifolds as well as truncated iteration methods, which can be applied for solving high-dimensional partial differential equations. In a number of important cases, quasi-optimality of approximation ranks and computational complexity have been analysed. This article gives a survey of these developments. We also discuss applications to problems in uncertainty quantification, to the solution of the electronic Schrödinger equation in the strongly correlated regime, and to the computation of metastable states in molecular dynamics.
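    A minimal illustration of rank truncation in this spirit is the Tucker-style higher-order SVD, the simplest relative of the hierarchical SVD described above (a trivial dimension tree with one node per mode). The sketch below is a hedged example, not the article's algorithm; tensor sizes, ranks, and the random test data are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Low-multilinear-rank truncation of a third-order tensor via truncated HOSVD."""
    # Nested bases: leading left singular vectors of each mode unfolding.
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    # Project onto the bases; each tensordot moves the contracted mode to the back,
    # so after three contractions the original mode order is restored.
    core = T
    for U in factors:
        core = np.tensordot(core, U, axes=(0, 0))
    # Expand the core back to full size with the same rotation trick.
    approx = core
    for U in factors:
        approx = np.tensordot(approx, U.T, axes=(0, 0))
    return approx

rng = np.random.default_rng(2)
T = rng.standard_normal((12, 12, 12))             # illustrative random tensor
T_r = hosvd_truncate(T, ranks=(4, 4, 4))
print("relative truncation error:", np.linalg.norm(T - T_r) / np.linalg.norm(T))
```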