
    The average number of critical rank-one approximations to a tensor

    Motivated by the many potential applications of low-rank multi-way tensor approximations, we set out to count the rank-one tensors that are critical points of the distance function to a general tensor v. As this count depends on v, we average over v drawn from a Gaussian distribution, and find formulas that relate this average to problems in random matrix theory. Comment: Several minor edits.
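
    The count itself is obtained analytically in the paper by averaging over Gaussian tensors. Purely to illustrate the objects being counted, the NumPy sketch below (illustrative function and parameter names, not the authors' code) uses alternating higher-order power iteration from random starts to locate critical rank-one approximations of one small random tensor numerically; each fixed point of the iteration is a critical point of the distance function.

        import numpy as np

        def critical_rank_one_values(v, starts=100, iters=500, tol=1e-12, seed=0):
            # Locate critical rank-one approximations of a 3-way tensor v by
            # alternating (higher-order) power iteration from random starts.
            # A critical point lam * (x o y o z) with unit x, y, z satisfies the
            # singular-vector-tuple equations, which are the fixed points below.
            rng = np.random.default_rng(seed)
            values = []
            for _ in range(starts):
                y, z = rng.standard_normal(v.shape[1]), rng.standard_normal(v.shape[2])
                y, z = y / np.linalg.norm(y), z / np.linalg.norm(z)
                lam = 0.0
                for _ in range(iters):
                    x = np.einsum('ijk,j,k->i', v, y, z)
                    x /= np.linalg.norm(x)
                    y = np.einsum('ijk,i,k->j', v, x, z)
                    y /= np.linalg.norm(y)
                    z = np.einsum('ijk,i,j->k', v, x, y)
                    lam_new = np.linalg.norm(z)
                    z /= lam_new
                    if abs(lam_new - lam) < tol:
                        lam = lam_new
                        break
                    lam = lam_new
                # Keep distinct critical values (crude deduplication by value only)
                if not any(abs(lam - w) < 1e-8 for w in values):
                    values.append(lam)
            return sorted(values, reverse=True)

        # One Gaussian random 2 x 2 x 2 tensor; each reported value corresponds to
        # (at least) one critical rank-one approximation found from the random starts.
        v = np.random.default_rng(1).standard_normal((2, 2, 2))
        print(critical_rank_one_values(v))
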

    Approximation of multi-variable signals and systems: a tensor decomposition approach

    Signals that evolve over multiple variables or indices occur in all fields of science and engineering. Measurements of the distribution of temperature across the globe during a certain period of time are an example of such a signal. Multi-variable systems describe the evolution of signals over a spatial-temporal domain. The mathematical equations involved in such a description are called a model, and this model dictates which values the signals can obtain as a function of time and space. In an industrial production setting, such mathematical models may be used to monitor the process or to determine the control action required to reach a certain set-point. Since their evolution is over both space and time, multi-variable systems are described by Partial Differential Equations (PDEs). Generally, it is not the signals or systems themselves one is interested in, but the information they carry. The main numerical tools to extract system trajectories from the PDE description are Finite Element (FE) methods. FE models allow simulation of the model via a discretization scheme. The main problem with FE models is their complexity, which leads to long simulation times, making them unsuitable for applications such as on-line monitoring of the process or model-based control design. Model reduction techniques aim to derive low-complexity replacement models from complex process models, which in the setting of this work are FE models. The approximations are achieved by projection on lower-dimensional subspaces of the signals and their dynamic laws. This work considers the computation of empirical projection spaces for signals and systems evolving over multi-dimensional domains. Formally, signal approximation may be viewed as a low-rank approximation problem. Whenever the signal under consideration is a function of multiple variables, low-rank approximations can be obtained via multi-linear functionals, i.e. tensors. It has been explained in this work that approximation of multi-variable systems also boils down to low-rank approximation problems.

    The first problem under consideration was that of finding low-rank approximations to tensors. For order-2 tensors, matrices, this problem is well understood. Generalization of these results to higher-order tensors is not straightforward. Finding tensor decompositions that allow suitable approximations after truncation is an active area of research. In this work a concept of rank for tensors, referred to as multi-linear or modal rank, has been considered. A new method has been defined to obtain modal rank decompositions of tensors, referred to as the Tensor Singular Value Decomposition (TSVD). Properties of the TSVD that reflect its sparsity structure have been derived, and low-rank approximation error bounds have been obtained for certain specific cases. An adaptation of the TSVD method has been proposed that may give better approximation results when not all modal directions are approximated. A numerical algorithm has been presented for the computation of the (dedicated) TSVD, which with a small adaptation can also be used to compute successive rank-one approximations to tensors. Finally, a simulation example has been included which demonstrates the methods proposed in this work and compares them to a well-known existing method.

    The concepts that were introduced and discussed with regard to signal approximation have been used in a system approximation context. We have considered the well-known model reduction method of Proper Orthogonal Decomposition (POD). We have shown how the basis functions inferred from the TSVD can be used to define projection spaces in POD. This adaptation is both a generalization and a restriction. It is a generalization because it allows POD to be used in a scalable fashion for problems with an arbitrary number of dependent and independent variables. However, it is also a restriction, since the projection spaces require a Cartesian product structure of the domain. The model reduction method that is thus obtained has been demonstrated on a benchmark example from chemical engineering. This application shows that the method is indeed feasible, and that its accuracy is comparable to existing methods for this example.

    In the final part of the thesis, the problem of reconstruction and approximation of multi-dimensional signals was considered. Specifically, the problem of sampling and signal reconstruction for multi-variable signals with non-uniformly distributed sensors on a Cartesian domain has been studied. The central question of this chapter was that of finding a reconstruction of the original signal from its samples. A specific reconstruction map has been examined and conditions for exact reconstruction have been presented. In cases where exact reconstruction is not possible, we have derived an expression for the reconstruction error.
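
    The TSVD developed in the thesis is its own construction. As a generic illustration of what a modal-rank (multi-linear-rank) truncation does, the following NumPy sketch implements the standard HOSVD-style recipe: truncate the left singular subspaces of the mode unfoldings and project the tensor onto them. Function names and rank choices are illustrative assumptions, not the thesis's algorithm.

        import numpy as np

        def unfold(t, mode):
            # Mode-k unfolding: move axis `mode` to the front and flatten the rest.
            return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

        def modal_rank_truncation(t, ranks):
            # HOSVD-style modal-rank truncation: for every mode, keep the leading
            # left singular vectors of the mode unfolding, project the tensor on
            # these subspaces, then re-expand the core to obtain the approximation.
            factors = []
            for mode, r in enumerate(ranks):
                u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
                factors.append(u[:, :r])
            core = t
            for mode, u in enumerate(factors):
                core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
            approx = core
            for mode, u in enumerate(factors):
                approx = np.moveaxis(np.tensordot(u, approx, axes=(1, mode)), 0, mode)
            return approx

        rng = np.random.default_rng(0)
        t = rng.standard_normal((8, 9, 10))
        t_hat = modal_rank_truncation(t, (3, 3, 3))
        print(np.linalg.norm(t - t_hat) / np.linalg.norm(t))  # relative approximation error
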

    Proper Generalized Decomposition for Nonlinear Convex Problems in Tensor Banach Spaces

    Tensor-based methods are receiving growing interest in scientific computing for the numerical solution of problems defined in high-dimensional tensor product spaces. A family of methods called Proper Generalized Decomposition methods has recently been introduced for the a priori construction of tensor approximations of the solution of such problems. In this paper, we give a mathematical analysis of a family of progressive and updated Proper Generalized Decompositions for a particular class of problems associated with the minimization of a convex functional over a reflexive tensor Banach space. Comment: Accepted in Numerische Mathematik.
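
    The paper's analysis concerns general convex functionals over reflexive tensor Banach spaces. Purely to make the progressive construction concrete, the sketch below applies the greedy rank-one (progressive PGD) idea to the simplest separable quadratic model problem A1 U + U A2^T = B, a standard toy case rather than the setting analysed in the paper; the function name and fixed-point scheme are illustrative assumptions.

        import numpy as np

        def progressive_pgd(A1, A2, B, terms=15, sweeps=30, seed=0):
            # Greedy rank-one construction U = sum_k x_k y_k^T for the separable
            # quadratic problem A1 U + U A2^T = B with A1, A2 symmetric positive
            # definite. Each new term minimizes the energy of the current residual
            # via an alternating fixed point (one direction at a time).
            rng = np.random.default_rng(seed)
            n, m = B.shape
            U = np.zeros((n, m))
            for _ in range(terms):
                R = B - A1 @ U - U @ A2.T              # residual of the current sum
                y = rng.standard_normal(m)
                y /= np.linalg.norm(y)
                for _ in range(sweeps):
                    x = np.linalg.solve((y @ y) * A1 + (y @ A2 @ y) * np.eye(n), R @ y)
                    y = np.linalg.solve((x @ x) * A2 + (x @ A1 @ x) * np.eye(m), R.T @ x)
                U += np.outer(x, y)
            return U

        # Toy separable operator: 1D finite-difference Laplacians in each direction
        n = 50
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        B = np.ones((n, n))
        U = progressive_pgd(A, A, B)
        print(np.linalg.norm(A @ U + U @ A.T - B) / np.linalg.norm(B))
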

    Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format

    We apply the Tensor Train (TT) decomposition to construct the tensor product Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an analytic TT representation. To construct its TT approximation numerically, we develop the new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to the adaptive cross approximation in the TT format, but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. We also demonstrate how to assemble the stochastic Galerkin matrix and how to compute the solution of the elliptic equation and its post-processing while staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required. Comment: This is a major revision of the manuscript arXiv:1406.2816 with significantly extended numerical experiments. Some unused material has been removed.
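
    The block TT cross algorithm of the paper builds the TT representation from a small number of evaluations of the PCE formula, without ever assembling the full coefficient tensor. As a much simpler illustration of the TT format itself, the sketch below computes a TT decomposition of a small, fully stored tensor by sequential truncated SVDs (the standard TT-SVD) and evaluates single entries by a chain of small matrix products; it is not the cross algorithm, and the names and tolerances are illustrative.

        import numpy as np

        def tt_svd(t, eps=1e-10):
            # Decompose a full tensor into TT cores via sequential truncated SVDs
            # (standard TT-SVD). Each core has shape (r_prev, n_k, r_next).
            dims = t.shape
            cores, r_prev = [], 1
            mat = t.reshape(r_prev * dims[0], -1)
            for k, n in enumerate(dims[:-1]):
                u, s, vt = np.linalg.svd(mat, full_matrices=False)
                r = max(1, int(np.sum(s > eps * s[0])))     # truncation rank
                cores.append(u[:, :r].reshape(r_prev, n, r))
                mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(mat.reshape(r_prev, dims[-1], 1))
            return cores

        def tt_entry(cores, idx):
            # Evaluate one entry of the TT tensor as a chain of small matrix products.
            g = cores[0][:, idx[0], :]
            for core, i in zip(cores[1:], idx[1:]):
                g = g @ core[:, i, :]
            return g[0, 0]

        # Low-rank test tensor: a separable function sampled on a grid has TT ranks 1
        x = np.linspace(0, 1, 10)
        t = np.exp(-np.add.outer(np.add.outer(x, x), x))
        cores = tt_svd(t)
        print([c.shape for c in cores], abs(tt_entry(cores, (2, 3, 4)) - t[2, 3, 4]))
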

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and on their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Parts 1 and 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
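
    As a concrete instance of the contraction argument, the sketch below evaluates the inner product of two tensors given in TT format by contracting one pair of cores at a time; the full tensors, which here would each have 4^20 (about 10^12) entries, are never formed. The helper functions are illustrative assumptions, not code from the monograph.

        import numpy as np

        def tt_random(dims, rank, seed=0):
            # Random TT cores with a fixed internal rank (boundary ranks are 1).
            rng = np.random.default_rng(seed)
            ranks = [1] + [rank] * (len(dims) - 1) + [1]
            return [rng.standard_normal((ranks[k], n, ranks[k + 1])) / np.sqrt(n * rank)
                    for k, n in enumerate(dims)]

        def tt_inner(a_cores, b_cores):
            # Inner product <a, b> of two TT tensors by contracting core after core.
            # Cost is O(d * n * r^3); the n**d full tensors are never assembled.
            g = np.ones((1, 1))
            for a, b in zip(a_cores, b_cores):
                # g[p, q] accumulates the contraction over all modes processed so far
                g = np.einsum('pq,pir,qis->rs', g, a, b)
            return g[0, 0]

        dims = [4] * 20                      # a 20-way tensor: 4**20 entries if formed
        a = tt_random(dims, rank=5, seed=1)
        b = tt_random(dims, rank=5, seed=2)
        print(tt_inner(a, b))                # computed from 20 small contractions
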