
    Greedy algorithms for high-dimensional eigenvalue problems

    In this article, we present two new greedy algorithms for the computation of the lowest eigenvalue (and an associated eigenvector) of a high-dimensional eigenvalue problem, and prove convergence results for these algorithms and their orthogonalized versions. The performance of our algorithms is illustrated on numerical test cases (including the computation of the buckling modes of a microstructured plate) and compared with that of another greedy algorithm for eigenvalue problems introduced by Ammar and Chinesta. (33 pages, 5 figures.)
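    For concreteness, the following is a minimal numpy sketch of the general idea behind such methods, not the authors' exact algorithms: the lowest eigenpair is approximated by greedily enriching the iterate with rank-one (Kronecker) corrections, each computed by alternating Rayleigh-Ritz solves in the two tensor directions. All function names and parameters (`greedy_lowest_eig`, `rayleigh_ritz`, `n_terms`, `n_alt`) are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh_ritz(A, W):
    # Smallest Ritz pair of A on span(W): generalized symmetric
    # eigenproblem (W^T A W) c = lambda (W^T W) c.
    B = W.T @ W + 1e-12 * np.eye(W.shape[1])  # tiny shift for conditioning
    vals, vecs = eigh(W.T @ A @ W, B)
    return vals[0], vecs[:, 0]

def greedy_lowest_eig(A, n1, n2, n_terms=8, n_alt=25, seed=0):
    """Greedy rank-one enrichment for the lowest eigenpair of a symmetric
    A acting on R^(n1*n2), viewed as a two-way tensor space."""
    rng = np.random.default_rng(seed)
    x = np.zeros(0)            # no previous iterate at the first greedy step
    for _ in range(n_terms):
        u = rng.standard_normal(n1)
        v = rng.standard_normal(n2)
        for _ in range(n_alt):
            # v fixed: the columns e_i (x) v span every kron(u, v) over u;
            # including x in the span lets the solve also remix the iterate.
            Wv = np.kron(np.eye(n1), (v / np.linalg.norm(v))[:, None])
            W = np.column_stack([x, Wv]) if x.size else Wv
            lam, c = rayleigh_ritz(A, W)
            u = c[1:] if x.size else c
            # u fixed: the columns kron(u, e_j) span every kron(u, v) over v.
            Wu = np.kron((u / np.linalg.norm(u))[:, None], np.eye(n2))
            W = np.column_stack([x, Wu]) if x.size else Wu
            lam, c = rayleigh_ritz(A, W)
            v = c[1:] if x.size else c
        x = W @ c
        x /= np.linalg.norm(x)
    return lam, x

# Toy usage on a separable 2-D Laplacian, whose lowest eigenvector is
# itself rank-one, so a single greedy term should already be accurate.
n1 = n2 = 20
L = 2 * np.eye(n1) - np.eye(n1, k=1) - np.eye(n1, k=-1)
A = np.kron(L, np.eye(n2)) + np.kron(np.eye(n1), L)
lam, x = greedy_lowest_eig(A, n1, n2)
print(lam, np.linalg.eigvalsh(A)[0])  # the two values should agree closely
```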

    Comparison of some Reduced Representation Approximations

    In the field of numerical approximation, specialists dealing with highly complex problems have recently proposed various ways to simplify the underlying problems. Depending on the problem being tackled and the community at work, different approaches have been developed with some success and have even gained some maturity; they can now be applied to information analysis or to the numerical simulation of PDEs. At this point, a crossed analysis and an effort to understand the similarities and differences between these approaches, which started from different backgrounds, is of interest. It is the purpose of this paper to contribute to this effort by comparing some constructive reduced representations of complex functions. We present in full detail the Adaptive Cross Approximation (ACA) and the Empirical Interpolation Method (EIM), together with other approaches that fall into the same category.
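    As an illustration of the kind of constructive reduced representation being compared, here is a minimal sketch of the EIM greedy loop (ACA proceeds analogously, selecting rows and columns of a matrix instead of snapshots and grid points). The snapshot setup and the function name `eim` are our own illustrative choices, not code from the paper.

```python
import numpy as np

def eim(U, n_terms, tol=1e-12):
    """Greedy Empirical Interpolation Method on a snapshot matrix U
    (rows: grid points, columns: parameter snapshots).  Returns the
    nodal basis Q and the 'magic point' indices."""
    n, m = U.shape
    Q = np.zeros((n, 0))
    pts = []
    for k in range(n_terms):
        if k == 0:
            R = U.copy()
        else:
            # Interpolate every snapshot at the magic points chosen so far;
            # Q[pts, :] is unit lower-triangular by construction.
            coeffs = np.linalg.solve(Q[pts, :], U[pts, :])
            R = U - Q @ coeffs
        j = int(np.argmax(np.max(np.abs(R), axis=0)))  # worst snapshot
        r = R[:, j]
        i = int(np.argmax(np.abs(r)))                  # next magic point
        if abs(r[i]) < tol:
            break
        Q = np.column_stack([Q, r / r[i]])             # normalize at the point
        pts.append(i)
    return Q, pts

# Toy usage: snapshots of u(x; mu) = 1 / (1 + mu * x) on [0, 1].
x = np.linspace(0.0, 1.0, 200)
U = np.array([1.0 / (1.0 + mu * x) for mu in np.linspace(0.1, 10.0, 50)]).T
Q, pts = eim(U, n_terms=8)
```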

    A tensor approximation method based on ideal minimal residual formulations for the solution of high-dimensional problems

    In this paper, we propose a method for approximating the solution of high-dimensional weakly coercive problems formulated in tensor spaces using low-rank approximation formats. The method can be seen as a perturbation of a minimal residual method, with a residual norm corresponding to the error in a specified solution norm. We introduce and analyze an iterative algorithm that provides a controlled approximation of the optimal approximation of the solution in a given low-rank subset, without any a priori information on this solution. We also introduce a weak greedy algorithm which uses this perturbed minimal residual method for the computation of successive greedy corrections in small tensor subsets, and we prove its convergence under some conditions on the parameters of the algorithm. The residual norm can be designed so that the resulting low-rank approximations are quasi-optimal with respect to particular norms of interest, thus yielding goal-oriented order reduction strategies for the approximation of high-dimensional problems. The proposed numerical method is applied to the solution of a stochastic partial differential equation discretized using standard Galerkin methods in tensor product spaces.
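    A hedged sketch of the weak greedy idea follows, with a plain Euclidean residual norm standing in for the paper's carefully designed ideal residual norm: each step adds a rank-one Kronecker correction computed by alternating least squares on the current residual. All names and parameters (`greedy_minres_rank_one`, `n_alt`, the toy Laplacian) are illustrative assumptions.

```python
import numpy as np

def greedy_minres_rank_one(A, b, n1, n2, n_terms=10, n_alt=15, seed=0):
    """Weak greedy solver for A x = b on R^(n1*n2): each step adds a
    rank-one correction kron(u, v) that approximately minimizes the
    residual norm ||r - A kron(u, v)||, via alternating least squares."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n1 * n2)
    for _ in range(n_terms):
        r = b - A @ x                      # residual targeted by this step
        u = rng.standard_normal(n1)
        v = rng.standard_normal(n2)
        for _ in range(n_alt):
            # v fixed: least squares in u over the columns A (e_i (x) v).
            Mv = A @ np.kron(np.eye(n1), v[:, None])
            u = np.linalg.lstsq(Mv, r, rcond=None)[0]
            # u fixed: least squares in v over the columns A (u (x) e_j).
            Mu = A @ np.kron(u[:, None], np.eye(n2))
            v = np.linalg.lstsq(Mu, r, rcond=None)[0]
        x += np.kron(u, v)
    return x

# Toy usage on a separable 2-D Laplacian.
n1 = n2 = 15
L = 2 * np.eye(n1) - np.eye(n1, k=1) - np.eye(n1, k=-1)
A = np.kron(L, np.eye(n2)) + np.kron(np.eye(n1), L)
b = np.kron(np.ones(n1), np.linspace(0.0, 1.0, n2))
x = greedy_minres_rank_one(A, b, n1, n2)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # relative residual
```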

    Finding a low-rank basis in a matrix subspace

    For a given matrix subspace, how can we find a basis that consists of low-rank matrices? This is a generalization of the sparse vector problem. It turns out that when the subspace is spanned by rank-1 matrices, such a basis can be obtained via the tensor CP decomposition. For the higher-rank case, the situation is not as straightforward. In this work we present an algorithm based on a greedy process applicable to higher-rank problems. Our algorithm first estimates the minimum rank by applying soft singular value thresholding to a nuclear norm relaxation, and then computes a matrix of that rank using the method of alternating projections; a sketch of this second phase is given below. We provide local convergence results and compare our algorithm with several alternative approaches. Applications include data compression beyond the classical truncated SVD, computing accurate eigenvectors of a near-multiple eigenvalue, image separation, and graph Laplacian eigenproblems.
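    The alternating-projections phase is simple enough to sketch: iterate between the truncated SVD (projection onto the rank-r set) and orthogonal projection back onto the subspace. The phase-one rank estimate via soft singular value thresholding is omitted here and r is assumed known; `low_rank_in_subspace` and the toy demo are our own illustrative constructions, not the authors' code.

```python
import numpy as np

def low_rank_in_subspace(S, shape, r, n_iter=500, tol=1e-10, seed=0):
    """Alternating projections for one (approximately) rank-r matrix inside
    a matrix subspace.  S: (m*n, d) matrix with orthonormal columns spanning
    the vectorized subspace; shape = (m, n)."""
    rng = np.random.default_rng(seed)
    x = S @ rng.standard_normal(S.shape[1])   # random start in the subspace
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        # Project onto the rank-<=r set (truncated SVD)...
        U, s, Vt = np.linalg.svd(x.reshape(shape), full_matrices=False)
        y = (U[:, :r] * s[:r]) @ Vt[:r]
        # ...then back onto the subspace; renormalize (both sets are cones).
        x_new = S @ (S.T @ y.ravel())
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            return x_new.reshape(shape)
        x = x_new
    return x.reshape(shape)

# Toy demo: in the span of two random rank-one matrices, the iteration
# should (generically) converge to a multiple of one of the generators.
m, n = 8, 6
rng = np.random.default_rng(1)
W1 = np.outer(rng.standard_normal(m), rng.standard_normal(n))
W2 = np.outer(rng.standard_normal(m), rng.standard_normal(n))
S, _ = np.linalg.qr(np.column_stack([W1.ravel(), W2.ravel()]))
X = low_rank_in_subspace(S, (m, n), r=1)
print(np.linalg.matrix_rank(X, tol=1e-8))   # expect 1
```

    To extract a full low-rank basis rather than a single element, the search can be repeated after deflating, i.e., restricting to the orthogonal complement of the elements already found within the subspace.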