
    Approximation of multi-variable signals and systems : a tensor decomposition approach

    Signals that evolve over multiple variables or indices occur in all fields of science and engineering. Measurements of the distribution of temperature across the globe during a certain period of time are an example of such a signal. Multi-variable systems describe the evolution of signals over a spatio-temporal domain. The mathematical equations involved in such a description are called a model, and this model dictates which values the signals can attain as a function of time and space. In an industrial production setting, such mathematical models may be used to monitor the process or to determine the control action required to reach a certain set-point. Since their evolution is over both space and time, multi-variable systems are described by Partial Differential Equations (PDEs). Generally, it is not the signals or systems themselves one is interested in, but the information they carry. The main numerical tools to extract system trajectories from the PDE description are Finite Element (FE) methods. FE models allow simulation of the model via a discretization scheme. The main problem with FE models is their complexity, which leads to long simulation times, making them unsuitable for applications such as on-line monitoring of the process or model-based control design. Model reduction techniques aim to derive low-complexity replacement models from complex process models, in the setting of this work, from FE models. The approximations are achieved by projecting the signals and their dynamic laws onto lower-dimensional subspaces. This work considers the computation of empirical projection spaces for signals and systems evolving over multi-dimensional domains. Formally, signal approximation may be viewed as a low-rank approximation problem. Whenever the signal under consideration is a function of multiple variables, low-rank approximations can be obtained via multi-linear functionals, i.e., tensors.
It has been explained in this work that approximation of multi-variable systems also boils down to low-rank approximation problems. The first problem under consideration was that of finding low-rank approximations to tensors. For order-2 tensors, i.e., matrices, this problem is well understood. Generalization of these results to higher-order tensors is not straightforward. Finding tensor decompositions that allow suitable approximations after truncation is an active area of research. In this work a concept of rank for tensors, referred to as multi-linear or modal rank, has been considered. A new method has been defined to obtain modal-rank decompositions of tensors, referred to as the Tensor Singular Value Decomposition (TSVD). Properties of the TSVD that reflect its sparsity structure have been derived, and low-rank approximation error bounds have been obtained for certain specific cases. An adaptation of the TSVD method has been proposed that may give better approximation results when not all modal directions are approximated. A numerical algorithm has been presented for the computation of the (dedicated) TSVD, which with a small adaptation can also be used to compute successive rank-one approximations to tensors. Finally, a simulation example has been included that demonstrates the methods proposed in this work and compares them to a well-known existing method. The concepts that were introduced and discussed with regard to signal approximation have been used in a system approximation context. We have considered the well-known model reduction method of Proper Orthogonal Decomposition (POD). We have shown how the basis functions inferred from the TSVD can be used to define projection spaces in POD. This adaptation is both a generalization and a restriction. It is a generalization because it allows POD to be used in a scalable fashion for problems with an arbitrary number of dependent and independent variables.
However, it is also a restriction, since the projection spaces require a Cartesian product structure of the domain. The model reduction method thus obtained has been demonstrated on a benchmark example from chemical engineering. This application shows that the method is indeed feasible, and that its accuracy is comparable to existing methods for this example. In the final part of the thesis the problem of reconstruction and approximation of multi-dimensional signals was considered. Specifically, the problem of sampling and signal reconstruction for multi-variable signals with non-uniformly distributed sensors on a Cartesian domain has been considered. The central question of this chapter was that of finding a reconstruction of the original signal from its samples. A specific reconstruction map has been examined and conditions for exact reconstruction have been presented. In cases where exact reconstruction was not possible, we have derived an expression for the reconstruction error.
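The abstract does not give the TSVD's exact construction, but the idea of modal-rank (multi-linear rank) truncation can be illustrated with a standard HOSVD-style sketch: compute a basis for each modal subspace from an SVD of the corresponding unfolding, then project the tensor onto the product of those subspaces. This is a minimal stand-in, not the thesis's method; the function and variable names below are illustrative.

```python
import numpy as np

def truncated_modal_rank(T, ranks):
    """Truncate a tensor to prescribed modal (multi-linear) ranks via
    mode-wise SVDs of the unfoldings -- an HOSVD-style stand-in for
    the TSVD, whose exact construction the abstract does not give."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode`: rows index that mode, columns the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])  # basis for the r-dimensional modal subspace
    approx = T
    for mode, U in enumerate(factors):
        # Project mode `mode` onto span(U): apply U @ U.T along that mode.
        approx = np.moveaxis(
            np.tensordot(U @ U.T, np.moveaxis(approx, mode, 0), axes=1),
            0, mode)
    return approx
```

For a tensor that is exactly of modal rank (1, 1, 1), e.g. an outer product of three vectors, the truncation is lossless; for general tensors the error is controlled by the discarded singular values of the unfoldings.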

    A continuous analogue of the tensor-train decomposition

    We develop new approximation algorithms and data structures for representing and computing with multivariate functions using the functional tensor-train (FT), a continuous extension of the tensor-train (TT) decomposition. The FT represents functions using a tensor-train ansatz by replacing the three-dimensional TT cores with univariate matrix-valued functions. The main contribution of this paper is a framework to compute the FT that employs adaptive approximations of univariate fibers, and that is not tied to any tensorized discretization. The algorithm can be coupled with any univariate linear or nonlinear approximation procedure. We demonstrate that this approach can generate multivariate function approximations that are several orders of magnitude more accurate, for the same cost, than those based on the conventional approach of compressing the coefficient tensor of a tensor-product basis. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields an algorithm that requires the computation of "continuous" matrix factorizations, such as the LU and QR decompositions of vector-valued functions. To support these developments, we describe continuous versions of an approximate maximum-volume cross-approximation algorithm and of a rounding algorithm that re-approximates an FT by one of lower ranks. We demonstrate that our technique improves the accuracy and robustness of high-dimensional integration, differentiation, and approximation of functions with local features such as discontinuities and other nonlinearities, compared to TT and quantics-TT approaches with fixed parameterizations.
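The continuous FT machinery is beyond a short sketch, but its discrete backbone, the TT-SVD, is compact: sweep through the dimensions, at each step performing a truncated SVD of a reshaped matrix and keeping the left factor as a three-way core. The sketch below (helper names are ours, not the paper's) shows only this discrete tensor-train ansatz; the FT replaces each discrete core with a matrix-valued univariate function.

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Decompose a d-way array into tensor-train cores G_k of shape
    (r_{k-1}, n_k, r_k) via successive truncated SVDs."""
    shape, d = T.shape, T.ndim
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        M = M.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r_new = max(1, int(np.sum(s > tol * s[0])))  # rank after truncation
        cores.append(U[:, :r_new].reshape(r, shape[k], r_new))
        M = s[:r_new, None] * Vt[:r_new]             # carry the remainder right
        r = r_new
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract a tensor train back into a full array."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

With a loose tolerance the same sweep yields a compressed train whose ranks adapt to the decay of the singular values, which is the discrete analogue of the paper's rounding of an FT to lower ranks.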

    Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families

    We propose Kernel Hamiltonian Monte Carlo (KMC), a gradient-free adaptive MCMC algorithm based on Hamiltonian Monte Carlo (HMC). On target densities where classical HMC is not an option due to intractable gradients, KMC adaptively learns the target's gradient structure by fitting an exponential family model in a Reproducing Kernel Hilbert Space. Computational costs are reduced by two novel efficient approximations to this gradient. While being asymptotically exact, KMC mimics HMC in terms of sampling efficiency and offers substantial mixing improvements over state-of-the-art gradient-free samplers. We support our claims with experimental studies on both toy and real-world applications, including Approximate Bayesian Computation and exact-approximate MCMC. (Comment: 20 pages, 7 figures)
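The paper's RKHS exponential-family score estimators are involved, so the sketch below conveys only the overall shape of a gradient-free HMC proposal, with a crude Gaussian-KDE score surrogate standing in for KMC's estimator (the surrogate and all names are our simplification, not the paper's method). Asymptotic exactness comes from the Metropolis accept/reject step, which evaluates the exact unnormalized target rather than the surrogate.

```python
import numpy as np

def kde_score(x, samples, h=0.5):
    """Gradient of the log of a Gaussian KDE fitted to past samples --
    a crude stand-in for KMC's RKHS exponential-family score fit."""
    diffs = samples - x                                   # (n, d)
    w = np.exp(-0.5 * np.sum(diffs**2, axis=1) / h**2)
    w /= w.sum()                                          # softmax weights
    return (w[:, None] * diffs).sum(axis=0) / h**2

def leapfrog(x, p, score, eps=0.1, steps=10):
    """Leapfrog integrator driven by a (possibly approximate) score.
    The resulting proposal is corrected by a Metropolis step using
    the exact unnormalized target density (not shown here)."""
    p = p + 0.5 * eps * score(x)          # initial momentum half-step
    for _ in range(steps - 1):
        x = x + eps * p                   # full position step
        p = p + eps * score(x)            # full momentum step
    x = x + eps * p
    p = p + 0.5 * eps * score(x)          # final momentum half-step
    return x, p
```

With the exact score of a standard normal, score(x) = -x, the integrator approximately conserves the Hamiltonian H = ||x||^2/2 + ||p||^2/2, which is what makes HMC-style proposals accepted with high probability.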