10 research outputs found

    Nonintrusive approximation of parametrized limits of matrix power algorithms -- application to matrix inverses and log-determinants

    We consider in this work quantities that can be obtained as limits of powers of parametrized matrices, for instance the inverse matrix or the logarithm of the determinant. Under the assumption of affine dependence on the parameters, we use the Empirical Interpolation Method (EIM) to derive an approximation for powers of these matrices, from which we derive a nonintrusive approximation of the aforementioned limits. We derive upper bounds on the error made by the obtained formula. Finally, numerical comparisons with classical intrusive and nonintrusive approximation techniques are provided: in the considered test cases, our algorithm performs well compared to the nonintrusive ones.
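
    As context for the kind of limit considered here, the sketch below (independent of the paper's EIM-based construction) shows how both the inverse and the log-determinant of a matrix I - M arise as limits of partial sums of matrix powers, assuming the spectral radius of M is below one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # spectral radius well below 1
I = np.eye(n)

# Partial sums of the Neumann series: sum_k M^k -> (I - M)^{-1},
# and -sum_k trace(M^k)/k -> log det(I - M); both are limits of matrix powers.
inv_approx = np.eye(n)
logdet_approx = 0.0
Mk = np.eye(n)
for k in range(1, 200):
    Mk = Mk @ M
    inv_approx += Mk
    logdet_approx -= np.trace(Mk) / k

print(np.linalg.norm(inv_approx - np.linalg.inv(I - M)))   # close to 0
print(abs(logdet_approx - np.linalg.slogdet(I - M)[1]))    # close to 0
```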

    Dictionary-based model reduction for state estimation

    We consider the problem of state estimation from m linear measurements, where the state u to recover is an element of the manifold M of solutions of a parameter-dependent equation. The state is estimated using prior knowledge on M coming from model order reduction. Variational approaches based on linear approximations of M, such as PBDW, yield a recovery error limited by the Kolmogorov m-width of M. To overcome this limitation, piecewise-affine approximations of M have also been considered, which consist in using a library of linear spaces among which one is selected by minimizing some distance to M. In this paper, we propose a state estimation method relying on dictionary-based model reduction, where a space is selected from a library generated by a dictionary of snapshots, using a distance to the manifold. The selection is performed among a set of candidate spaces obtained from the path of an ℓ1-regularized least-squares problem. Then, in the framework of parameter-dependent operator equations (or PDEs) with affine parameterizations, we provide an efficient offline-online decomposition based on randomized linear algebra, which ensures efficient and stable computations while preserving theoretical guarantees.
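
    A schematic and deliberately simplified version of the selection-then-fit idea is sketched below: measurements y = Wᵀu, a snapshot dictionary D, a single point on the ℓ1 path computed by plain ISTA, and a least-squares fit in the span of the selected snapshots. The names and the ISTA solver are illustrative choices, not the paper's offline-online algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dictionary_state_estimate(W, y, D, lam=1e-2, n_iter=500):
    """Toy dictionary-based state estimation (illustrative sketch).

    W : (N, m) matrix whose columns represent the measurement functionals,
        so that y = W.T @ u_true are the m observations;
    D : (N, K) dictionary of snapshots.
    An l1-regularized least-squares problem on the measured dictionary selects
    a few candidate snapshots; the state is then estimated by a least-squares
    fit of the measurements in their span.
    """
    A = W.T @ D                               # measured dictionary, shape (m, K)
    c = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # ISTA step size (1 / Lipschitz constant)
    for _ in range(n_iter):                   # ISTA iterations for one lasso path point
        c = soft_threshold(c - step * (A.T @ (A @ c - y)), step * lam)
    support = np.flatnonzero(np.abs(c) > 1e-8)
    V = D[:, support]                         # selected reduced space
    coef, *_ = np.linalg.lstsq(W.T @ V, y, rcond=None)
    return V @ coef                           # estimated state
```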

    A probabilistic reduced basis method for parameter-dependent problems

    Probabilistic variants of Model Order Reduction (MOR) methods have recently emerged for improving the stability and computational performance of classical approaches. In this paper, we propose a probabilistic Reduced Basis Method (RBM) for the approximation of a family of parameter-dependent functions. It relies on a probabilistic greedy algorithm with an error indicator that can be written as the expectation of some parameter-dependent random variable. Practical algorithms relying on Monte Carlo estimates of this error indicator are discussed. In particular, when a Probably Approximately Correct (PAC) bandit algorithm is used, the resulting procedure is proven to be a weak greedy algorithm with high probability. Intended applications concern the approximation of a parameter-dependent family of functions for which we only have access to (noisy) pointwise evaluations. As a particular application, we consider the approximation of solution manifolds of linear parameter-dependent partial differential equations with a probabilistic interpretation through the Feynman-Kac formula.
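
    The sketch below illustrates a greedy loop driven by a Monte Carlo error indicator built from noisy snapshot evaluations; the PAC-bandit refinement and the weak-greedy guarantees discussed in the paper are not reproduced, and the snapshot(mu, rng) oracle is an assumed interface.

```python
import numpy as np

def probabilistic_greedy(snapshot, params, n_basis, n_mc=64, rng=None):
    """Toy probabilistic greedy selection driven by a Monte Carlo error indicator.

    snapshot(mu, rng) is assumed to return one noisy evaluation of u(mu); the
    indicator for each parameter is the average projection error of such noisy
    samples onto the current reduced basis.  (No PAC-bandit control here.)
    """
    rng = rng or np.random.default_rng(0)
    basis = []
    for _ in range(n_basis):
        indicators = []
        for mu in params:
            samples = np.stack([snapshot(mu, rng) for _ in range(n_mc)])
            if basis:
                Q = np.linalg.qr(np.column_stack(basis))[0]
                samples = samples - (samples @ Q) @ Q.T     # project out current basis
            indicators.append(np.mean(np.linalg.norm(samples, axis=1)))
        mu_star = params[int(np.argmax(indicators))]        # worst estimated parameter
        u_star = np.mean([snapshot(mu_star, rng) for _ in range(n_mc)], axis=0)
        basis.append(u_star)
    return np.column_stack(basis)
```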

    (Parametrized) First Order Transport Equations: Realization of Optimally Stable Petrov-Galerkin Methods

    We consider ultraweak variational formulations for (parametrized) linear first-order transport equations in time and/or space. Computationally feasible pairs of optimally stable trial and test spaces are presented, starting from a suitable test space and defining an optimal trial space by application of the adjoint operator. As a result, the inf-sup constant equals one in both the continuous and the discrete case, and the computational realization is therefore straightforward. In particular, regarding the latter, we avoid a stabilization loop within the greedy algorithm when constructing reduced models in the framework of reduced basis methods. Several numerical experiments demonstrate the good performance of the new method.
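
    In purely algebraic terms, the construction can be caricatured as follows: pick a test basis, apply the (discrete) adjoint to obtain the trial basis, and the reduced Petrov-Galerkin system becomes symmetric positive definite, with stability constant equal to one in the induced norms. This is only a finite-dimensional sketch of the mechanism, not the ultraweak formulation itself.

```python
import numpy as np

def optimally_stable_pg(A, f, Z):
    """Algebraic sketch of an optimally stable Petrov-Galerkin pair.

    Z : (N, k) test basis.  The trial basis is obtained by applying the
    (discrete) adjoint operator, W = A.T @ Z, so the reduced system matrix
    Z.T @ A @ W = (A.T Z).T @ (A.T Z) is symmetric positive definite and the
    stability constant is one in the norms induced by this construction.
    """
    W = A.T @ Z                    # trial space = adjoint applied to test space
    G = Z.T @ A @ W                # SPD reduced system matrix
    c = np.linalg.solve(G, Z.T @ f)
    return W @ c                   # reduced approximation of the solution of A u = f
```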

    Randomized Local Model Order Reduction

    In this paper we propose local approximation spaces for localized model order reduction procedures such as domain decomposition and multiscale methods. Those spaces are constructed from local solutions of the partial differential equation (PDE) with random boundary conditions, yield an approximation that converges provably at a nearly optimal rate, and can be generated at close to optimal computational complexity. In many localized model order reduction approaches, like the generalized finite element method, static condensation procedures, and the multiscale finite element method, local approximation spaces can be constructed by approximating the range of a suitably defined transfer operator that acts on the space of local solutions of the PDE. Optimal local approximation spaces that yield in general an exponentially convergent approximation are given by the left singular vectors of this transfer operator [I. Babuška and R. Lipton 2011, K. Smetana and A. T. Patera 2016]. However, the direct calculation of these singular vectors is computationally very expensive. In this paper, we propose an adaptive randomized algorithm based on methods from randomized linear algebra [N. Halko et al. 2011], which constructs a local reduced space approximating the range of the transfer operator and thus the optimal local approximation spaces. The adaptive algorithm relies on a probabilistic a posteriori error estimator, which we prove to be both efficient and reliable with high probability. Several numerical experiments confirm the theoretical findings.
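
    A minimal sketch of the adaptive randomized range approximation is given below; apply_T stands for the action of the transfer operator (random boundary data mapped to the local solution) and the held-out-sample error indicator is a simplification of the certified probabilistic estimator analyzed in the paper.

```python
import numpy as np

def randomized_range_basis(apply_T, dim_source, block=10, n_test=5,
                           tol=1e-6, max_rounds=20, rng=None):
    """Adaptive randomized approximation of the range of a transfer operator (sketch).

    apply_T(x) maps random boundary data x to the corresponding local solution
    on the subdomain of interest.  Held-out random images provide a simple
    probabilistic a posteriori indicator of the range approximation error, in
    the spirit of Halko et al. (2011).
    """
    rng = rng or np.random.default_rng(0)
    test = [apply_T(rng.standard_normal(dim_source)) for _ in range(n_test)]
    B = np.zeros((test[0].size, 0))
    for _ in range(max_rounds):
        # grow the basis with images of random boundary conditions
        new = np.column_stack([apply_T(rng.standard_normal(dim_source))
                               for _ in range(block)])
        B = np.linalg.qr(np.column_stack([B, new]))[0]   # grow and re-orthonormalize
        # probabilistic error indicator: residual of held-out random images
        err = max(np.linalg.norm(t - B @ (B.T @ t)) for t in test)
        if err < tol:
            break
    return B
```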

    Randomized residual-based error estimators for parametrized equations

    We propose a randomized a posteriori error estimator for reduced order approximations of parametrized (partial) differential equations. The error estimator has several important properties: the effectivity is close to unity with prescribed lower and upper bounds at specified high probability; the estimator does not require the calculation of stability (coercivity, or inf-sup) constants; the online cost to evaluate the a posteriori error estimator is commensurate with the cost to find the reduced order approximation; and the probabilistic bounds extend to many queries with only a modest increase in cost. To build this estimator, we first estimate the norm of the error with a Monte Carlo estimator using Gaussian random vectors whose covariance is chosen according to the desired error measure, e.g., user-defined norms or a quantity of interest. Then, we introduce a dual problem with a random right-hand side, the solution of which allows us to rewrite the error estimator in terms of the residual of the original equation. In order to obtain a fast-to-evaluate estimator, model order reduction methods can be used to approximate the random dual solutions. Here, we propose a greedy algorithm that is guided by a scalar quantity of interest depending on the error estimator. Numerical experiments on a multi-parametric Helmholtz problem demonstrate that this strategy yields rather low-dimensional reduced dual spaces.
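
    The core identity behind the estimator can be sketched for a linear system A u = b with reduced approximation u_r: the error e = u - u_r satisfies A e = r, so for a Gaussian vector z with covariance Sigma and dual solution y solving Aᵀ y = z one has zᵀe = yᵀr, and averaging (yᵀr)² over a few samples estimates the Sigma-weighted error norm. In the paper the dual solutions are further replaced by reduced-order surrogates; the sketch below solves them exactly for clarity.

```python
import numpy as np

def randomized_error_estimator(A, u_reduced, b, Sigma_sqrt, n_samples=20, rng=None):
    """Randomized residual-based error estimator (illustrative sketch).

    The error e = u - u_reduced solves A e = r with r = b - A u_reduced.
    For Gaussian vectors z_i with covariance Sigma = Sigma_sqrt @ Sigma_sqrt.T
    and dual solutions A.T y_i = z_i, one has z_i.T e = y_i.T r, so the
    Sigma-weighted error norm is estimated from the residual alone.
    """
    rng = rng or np.random.default_rng(0)
    r = b - A @ u_reduced
    squares = []
    for _ in range(n_samples):
        z = Sigma_sqrt @ rng.standard_normal(A.shape[0])
        y = np.linalg.solve(A.T, z)          # dual problem with random right-hand side
        squares.append((y @ r) ** 2)
    return np.sqrt(np.mean(squares))         # Monte Carlo estimate of the error norm
```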

    Model order reduction methods for parametrized equations -- Applications to uncertainty quantification

    Model order reduction has become an inescapable tool for the solution of high-dimensional parameter-dependent equations arising in uncertainty quantification, optimization or inverse problems. In this thesis we focus on low-rank approximation methods, in particular on reduced basis methods and on tensor approximation methods. The approximation obtained by Galerkin projection may be inaccurate when the operator is ill-conditioned. For projection-based methods, we propose preconditioners built by interpolation of the operator inverse. We rely on randomized linear algebra for the efficient computation of these preconditioners. Adaptive interpolation strategies are proposed in order to improve either the error estimates or the projection onto reduced spaces. For tensor approximation methods, we propose a minimal-residual formulation with ideal residual norms. The proposed algorithm, which can be interpreted as a gradient algorithm with an implicit preconditioner, yields a quasi-optimal approximation of the solution. Finally, we address the problem of the approximation of vector-valued or functional-valued quantities of interest. For this purpose we generalize the 'primal-dual' approaches to the non-scalar case, and we propose new methods for the projection onto reduced spaces. In the context of tensor approximation we consider a norm depending on the error on the quantity of interest, which yields approximations of the solution that take into account the objective of the numerical simulation.
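
    As an illustration of one of the ingredients mentioned above (assuming a small set of interpolation parameters theta_i and user-supplied interpolation weights lambda_i(theta)), a preconditioner built by interpolation of operator inverses can be sketched as follows; explicit inverses replace the factorizations and randomized approximations used in the thesis.

```python
import numpy as np

def interpolated_inverse_preconditioner(A_samples):
    """Preconditioner obtained by interpolation of operator inverses (toy sketch).

    A_samples is a list of operators A(theta_i) assembled at a few interpolation
    parameters.  For a new parameter value, given interpolation weights
    lambda_i(theta), the preconditioner acts as
        P(theta) r = sum_i lambda_i(theta) * A(theta_i)^{-1} r.
    """
    inverses = [np.linalg.inv(Ai) for Ai in A_samples]   # stand-in for factorizations

    def apply(r, weights):
        # weights = [lambda_i(theta)] evaluated at the current parameter
        return sum(w * (Inv @ r) for w, Inv in zip(weights, inverses))

    return apply
```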

    Multi space reduced basis preconditioners for parametrized partial differential equations

    The multiquery solution of parametric partial differential equations (PDEs), that is, PDEs depending on a vector of parameters, is computationally challenging and appears in several engineering contexts, such as PDE-constrained optimization, uncertainty quantification or sensitivity analysis. When the finite element (FE) method is used as approximation technique, an algebraic system must be solved for each instance of the parameter, leading to a critical bottleneck in the multiquery context, a problem that becomes even more pronounced when dealing with nonlinear or time-dependent PDEs. Several techniques have been proposed to deal with sequences of linear systems, such as truncated Krylov subspace recycling methods, deflated restarting techniques and approximate inverse preconditioners; however, these techniques do not satisfactorily exploit the parameter dependence. More recently, the reduced basis (RB) method, together with other reduced order modeling (ROM) techniques, has emerged as an efficient tool to tackle parametrized PDEs. In this thesis, we investigate a novel preconditioning strategy for parametrized systems arising from the FE discretization of parametrized PDEs. Our preconditioner combines multiplicatively an RB coarse component, built upon the RB method, and a nonsingular fine grid preconditioner. The proposed technique hinges upon the construction of a new Multi Space Reduced Basis (MSRB) method, where an RB solver is built at each step of the chosen iterative method and trained to accurately solve the error equation. The resulting preconditioner directly exploits the parameter dependence, since it is tailored to the class of problems at hand, and significantly speeds up the solution of the parametrized linear system. We analyze the proposed preconditioner from a theoretical standpoint, providing assumptions that lead to its well-posedness and efficiency. We apply our strategy to a broad range of problems described by parametrized PDEs: (i) elliptic problems such as advection-diffusion-reaction equations, (ii) evolution problems such as time-dependent advection-diffusion-reaction equations or linear elastodynamics equations, (iii) saddle-point problems such as the Stokes equations, and, finally, (iv) the Navier-Stokes equations. Even though the structure of the preconditioner is similar for all these classes of problems, its fine and coarse components must be chosen carefully in order to provide the best possible results. Several comparisons are made with respect to current state-of-the-art preconditioning and ROM techniques. Finally, we employ the proposed technique to speed up the solution of problems in the field of cardiovascular modeling.
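
    A schematic two-level preconditioner in this spirit, combining an RB coarse correction with a fine component multiplicatively, is sketched below; it illustrates the coarse/fine splitting only and is not the MSRB construction itself.

```python
import numpy as np

def rb_coarse_fine_preconditioner(A, V, fine_prec):
    """Two-level preconditioner with a multiplicative RB coarse correction (sketch).

    V contains reduced basis vectors as columns; the coarse step solves the
    error equation projected onto span(V), the fine step (e.g. a Jacobi sweep)
    acts on the remaining residual.
    """
    A_rb = V.T @ A @ V                          # reduced (coarse) operator

    def apply(r):
        e = V @ np.linalg.solve(A_rb, V.T @ r)  # coarse RB correction
        e = e + fine_prec(r - A @ e)            # fine correction on updated residual
        return e

    return apply

# Example fine component: Jacobi (diagonal) preconditioner
# fine_prec = lambda r: r / np.diag(A)
```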