    Approximation of multi-variable signals and systems : a tensor decomposition approach

    Signals that evolve over multiple variables or indices occur in all fields of science and engineering. Measurements of the distribution of temperature across the globe during a certain period of time are an example of such a signal. Multi-variable systems describe the evolution of signals over a spatial-temporal domain. The mathematical equations involved in such a description are called a model, and this model dictates which values the signals can attain as a function of time and space. In an industrial production setting, such mathematical models may be used to monitor the process or to determine the control action required to reach a certain set-point. Since their evolution is over both space and time, multi-variable systems are described by Partial Differential Equations (PDEs). Generally, it is not the signals or systems themselves one is interested in, but the information they carry. The main numerical tools to extract system trajectories from the PDE description are Finite Element (FE) methods. FE models allow simulation of the model via a discretization scheme. The main problem with FE models is their complexity, which leads to long simulation times, making them unsuitable for applications such as on-line monitoring of the process or model-based control design. Model reduction techniques aim to derive low-complexity replacement models from complex process models; in the setting of this work, from FE models. The approximations are achieved by projection of the signals and their dynamic laws onto lower-dimensional subspaces. This work considers the computation of empirical projection spaces for signals and systems evolving over multi-dimensional domains. Formally, signal approximation may be viewed as a low-rank approximation problem. Whenever the signal under consideration is a function of multiple variables, low-rank approximations can be obtained via multi-linear functionals, i.e. tensors.
It has been explained in this work that approximation of multi-variable systems also boils down to low-rank approximation problems. The first problem under consideration was that of finding low-rank approximations to tensors. For order-2 tensors (matrices), this problem is well understood. Generalization of these results to higher-order tensors is not straightforward; finding tensor decompositions that allow suitable approximations after truncation is an active area of research. In this work a concept of rank for tensors, referred to as multi-linear or modal rank, has been considered. A new method has been defined to obtain modal rank decompositions of tensors, referred to as the Tensor Singular Value Decomposition (TSVD). Properties of the TSVD that reflect its sparsity structure have been derived, and low-rank approximation error bounds have been obtained for certain specific cases. An adaptation of the TSVD method has been proposed that may give better approximation results when not all modal directions are approximated. A numerical algorithm has been presented for the computation of the (dedicated) TSVD, which with a small adaptation can also be used to compute successive rank-one approximations to tensors. Finally, a simulation example has been included which demonstrates the methods proposed in this work and compares them to a well-known existing method. The concepts that were introduced and discussed with regard to signal approximation have been used in a system approximation context. We have considered the well-known model reduction method of Proper Orthogonal Decomposition (POD), and have shown how the basis functions inferred from the TSVD can be used to define projection spaces in POD. This adaptation is both a generalization and a restriction. It is a generalization because it allows POD to be used in a scalable fashion for problems with an arbitrary number of dependent and independent variables.
However, it is also a restriction, since the projection spaces require a Cartesian product structure of the domain. The model reduction method thus obtained has been demonstrated on a benchmark example from chemical engineering. This application shows that the method is indeed feasible, and that its accuracy is comparable to existing methods for this example. In the final part of the thesis the problem of reconstruction and approximation of multi-dimensional signals was considered. Specifically, the problem of sampling and signal reconstruction for multi-variable signals with non-uniformly distributed sensors on a Cartesian domain has been considered. The central question of this chapter was that of finding a reconstruction of the original signal from its samples. A specific reconstruction map has been examined and conditions for exact reconstruction have been presented. In cases where exact reconstruction is not possible, we have derived an expression for the reconstruction error.
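The TSVD defined in the thesis is its own construction, so it cannot be reproduced from this abstract alone. As a generic illustration of the underlying idea of modal-rank (multi-linear rank) truncation, the following sketch projects each mode of a tensor onto the span of its leading left singular vectors, in the spirit of mode-wise SVD truncation; all function names are ours, and the details of the actual TSVD differ:

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from its mode-n unfolding."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def modal_truncate(T, ranks):
    """Project each mode onto its leading left singular vectors.

    The result has multi-linear (modal) rank at most `ranks`, since each
    mode-wise step is an orthogonal projection that never increases the
    rank of any other mode's unfolding.
    """
    A = T.copy()
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(A, mode), full_matrices=False)
        Ur = U[:, :r]  # leading r modal directions of this mode
        A = fold(Ur @ (Ur.T @ unfold(A, mode)), mode, A.shape)
    return A

# Example: compress a random order-3 tensor to multi-linear rank (2, 2, 2)
rng = np.random.default_rng(0)
T = rng.standard_normal((6, 7, 8))
A = modal_truncate(T, (2, 2, 2))
err = np.linalg.norm(T - A) / np.linalg.norm(T)
```

Because the composite of the mode-wise projections is itself an orthogonal projection, the relative error `err` is always at most 1; for smooth physical fields (such as the temperature example above) it is typically far smaller than for the random tensor used here.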

    Low-Rank Matrices on Graphs: Generalized Recovery & Applications

    Many real-world datasets exhibit a linear or non-linear low-rank structure in a very low-dimensional space. Unfortunately, one often has very little or no information about the geometry of that space, resulting in a highly under-determined recovery problem. Under certain circumstances, state-of-the-art algorithms provide exact recovery for linear low-rank structures, but at the expense of poorly scalable algorithms based on the nuclear norm. The case of non-linear structures, however, remains unresolved. We revisit the problem of low-rank recovery from a totally different perspective, involving graphs which encode pairwise similarity between the data samples and features. Surprisingly, our analysis confirms that it is possible to recover many approximate linear and non-linear low-rank structures, with recovery guarantees, using a set of highly scalable and efficient algorithms. We call such data matrices \textit{Low-Rank matrices on graphs} and show that many real-world datasets satisfy this assumption approximately due to underlying stationarity. Our detailed theoretical and experimental analysis unveils the power of the simple, yet very novel, recovery framework \textit{Fast Robust PCA on Graphs}.
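The abstract does not spell out the recovery algorithm, so the following is only a minimal sketch of the general idea it builds on: penalizing a signal's variation over a similarity graph and recovering a smooth (approximately low-rank) estimate in closed form. The path graph, the penalty weight `gamma`, and the function names are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of a path graph linking consecutive samples."""
    L = 2.0 * np.eye(n)
    L[0, 0] = L[-1, -1] = 1.0
    idx = np.arange(n - 1)
    L[idx, idx + 1] = -1.0
    L[idx + 1, idx] = -1.0
    return L

def graph_smooth(X, gamma=5.0):
    """Closed-form minimizer of ||X - Y||_F^2 + gamma * tr(Y^T L Y).

    The solution Y = (I + gamma*L)^{-1} X acts as a low-pass filter in the
    graph Fourier basis: components aligned with small Laplacian
    eigenvalues (smooth over the graph) pass, rough ones are attenuated.
    """
    n = X.shape[0]
    L = path_laplacian(n)
    return np.linalg.solve(np.eye(n) + gamma * L, X)

# A signal that is smooth over the path graph, corrupted by noise:
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
clean = np.outer(np.sin(2 * np.pi * t), np.ones(20))  # rank-1, graph-smooth
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
rec = graph_smooth(noisy)
```

Since the clean signal lives on the low-frequency end of the graph spectrum while the noise is spread across all graph frequencies, the filtered estimate `rec` is closer to `clean` than the raw measurements are; graph-based recovery methods exploit exactly this kind of spectral concentration.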

    OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage

    The truncated singular value decomposition (SVD) of the measurement matrix is the optimal solution to the _representation_ problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) _denoising_ problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large-matrix limit of the optimal weighting coefficients, and show that they can be computed directly from data for a large class of noise models that includes the i.i.d. Gaussian case. Our analysis brings into sharp focus the shrinkage-and-thresholding form of the optimal weights and the non-convex nature of the associated shrinkage function (on the singular values), and explains why matrix regularization via singular value thresholding with convex penalty functions (such as the nuclear norm) will always be suboptimal. We validate our theoretical predictions with numerical simulations, develop an implementable algorithm (OptShrink) that realizes the predicted performance gains, and show how our methods can be used to improve estimation in the setting where the measured matrix has missing entries. Comment: Published version. The algorithm can be downloaded from http://www.eecs.umich.edu/~rajnrao/optshrin
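OptShrink's exact data-driven weights come from random-matrix-theoretic quantities that are beyond this sketch; what follows only illustrates the generic shrink-and-threshold mechanism the abstract describes: rebuild the measurement matrix from its SVD with re-weighted singular values. The threshold choice and variable names are ours, for illustration:

```python
import numpy as np

def shrink_singular_values(Y, shrinker):
    """Rebuild Y from its SVD with singular values re-weighted by `shrinker`."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(shrinker(s)) @ Vt

# Rank-2 signal matrix buried in i.i.d. Gaussian noise
rng = np.random.default_rng(2)
n = 200
X = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # low-rank signal
Y = X + rng.standard_normal((n, n))                            # noisy measurement

# Illustrative threshold just above the third singular value, so the top
# two (signal-bearing) components are retained.
tau = np.linalg.svd(Y, compute_uv=False)[2] * 1.01

# Hard thresholding = truncated SVD: keeps the retained singular values as-is.
hard = shrink_singular_values(Y, lambda s: np.where(s > tau, s, 0.0))

# Soft thresholding (the nuclear-norm prox): additionally pulls the retained
# values toward zero by a constant amount.
soft = shrink_singular_values(Y, lambda s: np.maximum(s - tau, 0.0))
```

The paper's point is that neither of these convex-penalty-style rules is optimal: the noise inflates each observed singular value and rotates the singular vectors, and the optimal weights correct for both effects with a non-convex shrinkage function estimated directly from the noise-only singular values of the data.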