85 research outputs found

    Efficient Reduction Techniques for the Simulation and Optimization of Parametrized Systems: Analysis and Applications

    This thesis is concerned with the development, analysis and implementation of efficient reduced order models (ROMs) for the simulation and optimization of parametrized partial differential equations (PDEs). Since the high-fidelity approximation of many complex models quickly leads to large-scale problems, the need to perform multiple simulations to explore different scenarios, as well as to achieve rapid responses, often demands unaffordable computational resources. Alleviating this extreme computational effort is the main motivation for developing ROMs, i.e. low-dimensional approximations of the underlying high-fidelity problem. Among the wide range of model order reduction approaches, here we focus on so-called projection-based methods, in particular Galerkin and Petrov-Galerkin reduced basis methods. In this context, the goal is to generate low-cost, fast, yet sufficiently accurate ROMs which characterize the system response over the whole range of input parameters of interest. Several challenges have to be faced to ensure reliability and computational efficiency. As regards the former, this thesis presents heuristic approaches to approximate the stability factor of parametrized nonlinear PDEs, a key ingredient of any a posteriori error estimate. Concerning computational efficiency, we propose different strategies to combine the Matrix Discrete Empirical Interpolation Method (MDEIM) with a state approximation resulting either from a proper orthogonal decomposition or from a greedy approach. Specifically, we exploit the MDEIM to develop fast and efficient ROMs for nonaffinely parametrized elliptic and parabolic PDEs, as well as for the time-dependent Navier-Stokes equations.
The efficacy of the proposed methods is demonstrated on a variety of computationally intensive applications, such as the shape optimization of an acoustic device, the simulation of blood flow in cerebral aneurysms, and the simulation of solute dynamics in blood flow and arterial walls. Furthermore, the above-mentioned techniques have been exploited to develop a model order reduction framework for parametrized optimization problems constrained by either linear or nonlinear stationary PDEs. Among this wide class of problems, here we focus on those featuring high-dimensional control variables. To cope with this high dimensionality and complexity, we propose an all-at-once optimize-then-reduce paradigm, where a simultaneous state and control reduction is performed. This methodology is applied first to a data reconstruction problem arising in hemodynamics, and then to several optimal flow control problems.
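As context for the POD-based state approximation mentioned above, the offline-online split can be sketched on a toy affinely parametrized linear system A(mu) u = f with A(mu) = A0 + mu*A1. All names, sizes and tolerances here are illustrative assumptions, not the thesis's actual problems (which are nonaffine and treated with MDEIM):

```python
import numpy as np

# Minimal sketch of a POD-Galerkin reduced order model for a toy affinely
# parametrized SPD system A(mu) u = f. Everything here is an illustrative
# stand-in for the high-fidelity problems treated in the thesis.

rng = np.random.default_rng(0)
n = 200                                  # high-fidelity dimension
A0 = np.diag(np.arange(1.0, n + 1.0))    # SPD base operator
A1 = np.eye(n)                           # affine parametric perturbation
f = rng.standard_normal(n)

def solve_fom(mu):
    """High-fidelity ('full order model') solve."""
    return np.linalg.solve(A0 + mu * A1, f)

# Offline: collect snapshots over a training set, compress with POD (SVD).
train = np.linspace(0.1, 10.0, 20)
S = np.column_stack([solve_fom(mu) for mu in train])
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
V = U[:, :r]                             # POD basis

# Online: Galerkin projection yields a small r x r system for each new mu.
def solve_rom(mu):
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ f)

mu_test = 3.7
err = np.linalg.norm(solve_rom(mu_test) - solve_fom(mu_test))
```

In the thesis's setting the parameter dependence is nonaffine, which is precisely where (M)DEIM enters: it recovers an approximate affine decomposition of the operator so that the small reduced matrices can still be assembled cheaply online.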

    A robust error estimator and a residual-free error indicator for reduced basis methods

    The Reduced Basis Method (RBM) is a rigorous model reduction approach for solving parametrized partial differential equations. It identifies a low-dimensional subspace for approximation of the parametric solution manifold that is embedded in high-dimensional space, and a reduced order model is subsequently constructed in this subspace. RBM relies on residual-based error indicators or a posteriori error bounds to guide construction of the reduced solution subspace, to serve as a stopping criterion, and to certify the resulting surrogate solutions. Unfortunately, it is well known that the standard algorithm for residual norm computation suffers from premature stagnation at the level of the square root of machine precision. In this paper, we develop two alternatives to the standard offline phase of reduced basis algorithms. First, we design a robust strategy for computation of residual error indicators that allows RBM algorithms to enrich the solution subspace with accuracy beyond root machine precision. Second, we propose a new error indicator based on the Lebesgue function in interpolation theory. This error indicator does not require computation of residual norms; it only requires the ability to compute the RBM solution. The residual-free indicator is rigorous in that it bounds the error committed by the RBM approximation, but only up to an uncomputable multiplicative constant. Because of this, it is effective for choosing snapshots during the offline RBM phase, but cannot currently be used to certify the error that the approximation commits. However, it circumvents the need for a posteriori analysis of numerical methods, and therefore can be effective on problems where such a rigorous estimate is hard to derive.
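The residual-driven greedy loop that both alternatives plug into can be sketched on a toy affinely parametrized SPD system. This naive version computes residual norms directly in the full space, which sidesteps (at higher online cost) the round-off stagnation of the standard offline-online residual formula that the paper addresses; all names, sizes and tolerances are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of a residual-driven (weak) greedy reduced basis loop for
# a toy system A(mu) u = f with A(mu) = A0 + mu * A1, A1 positive
# semidefinite so A(mu) stays SPD. Illustrative stand-in, not the paper's
# benchmark problems.

rng = np.random.default_rng(1)
n = 200
A0 = np.diag(np.linspace(1.0, 5.0, n))
G = 0.01 * rng.standard_normal((n, n))
A1 = G @ G.T                                # PSD perturbation
f = rng.standard_normal(n)

def A(mu):
    return A0 + mu * A1

def rb_residual(mu):
    """Residual-norm error indicator of the Galerkin RB solution at mu."""
    y = np.linalg.solve(V.T @ A(mu) @ V, V.T @ f)
    return np.linalg.norm(f - A(mu) @ (V @ y))

train = np.linspace(0.0, 1.0, 50)           # training set in parameter space
u = np.linalg.solve(A(train[0]), f)         # seed snapshot
V = (u / np.linalg.norm(u)).reshape(n, 1)

tol = 1e-6
for _ in range(15):
    res = [rb_residual(mu) for mu in train] # scan the training set
    k = int(np.argmax(res))
    if res[k] < tol:
        break
    u = np.linalg.solve(A(train[k]), f)     # snapshot at worst parameter
    u -= V @ (V.T @ u)                      # Gram-Schmidt enrichment
    V = np.column_stack([V, u / np.linalg.norm(u)])
```

In a realistic offline-online implementation the residual norm would instead be assembled from precomputed parameter-independent inner products, and it is exactly that formula whose accuracy stagnates near root machine precision.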

    Adaptive greedy algorithms based on parameter-domain decomposition and reconstruction for the reduced basis method

    The reduced basis method (RBM) empowers repeated and rapid evaluation of parametrized partial differential equations through an offline-online decomposition, a.k.a. a learning-execution process. A key feature of the method is a greedy algorithm that repeatedly scans the training set, a fine discretization of the parameter domain, to identify the next dimension of the parameter-induced solution manifold along which the surrogate solution space is expanded. Although successfully applied to problems with fairly high parametric dimensions, the method faces the challenge that this scanning cost dominates the offline cost, because it is proportional to the cardinality of the training set, which grows exponentially with the parameter dimension. In this work, we review three recent attempts at effectively delaying this curse of dimensionality, and propose two new hybrid strategies based on successive refinement and multilevel maximization of the error estimate over the training set. All five offline-enhanced methods and the original greedy algorithm are tested and compared on two types of problems: the thermal block problem and the geometrically parameterized Helmholtz problem.
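The successive-refinement idea can be illustrated schematically. Here a stand-in error indicator (distance to the already selected parameters) replaces the RB error estimate, so only the scanning logic over coarse-to-fine training sets is shown; all names, grids and tolerances are illustrative assumptions, not the paper's algorithms:

```python
import numpy as np

# Schematic sketch of a greedy loop over successively refined training
# sets, one strategy for taming the scanning cost. A cheap stand-in
# indicator replaces the RB error estimate; illustrative only.

def indicator(mu, selected):
    """Placeholder for an RB error estimate at parameter mu."""
    return min(abs(mu - s) for s in selected) if selected else np.inf

levels = [np.linspace(0.0, 1.0, 2**l + 1) for l in range(3, 8)]
selected = [0.0]                     # greedily chosen parameters
tol = 0.01

for train in levels:                 # coarse-to-fine training sets
    while True:
        errs = [indicator(mu, selected) for mu in train]
        k = int(np.argmax(errs))
        if errs[k] < tol:            # saturated on this level: refine grid
            break
        selected.append(float(train[k]))
```

The point of the strategy is that most greedy steps scan only a coarse grid; the exponentially large fine grid is visited only after the estimate has saturated on the coarser levels.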

    Model Order Reduction for Parameterized Nonlinear Evolution Equations


    Model Order Reduction in Fluid Dynamics: Challenges and Perspectives

    This chapter reviews techniques of model reduction for fluid dynamics systems. Fluid systems are known to be difficult to reduce efficiently for several reasons. First of all, they exhibit strong nonlinearities — mainly related to nonlinear convection terms and/or geometric variability — that often cannot be treated by simple linearization. Additional difficulties arise when attempting model reduction of unsteady flows, especially when long-term transient behavior needs to be accurately predicted using reduced order models and more complex features, such as turbulence or multiphysics phenomena, have to be taken into consideration. We first discuss some general principles that apply to many parametric model order reduction problems, then apply them to steady and unsteady viscous flows modelled by the incompressible Navier-Stokes equations. We address questions of inf-sup stability, certification through error estimation, computational issues and — in the unsteady case — long-time stability of the reduced model. Moreover, we provide an extensive list of literature references.

    Multi space reduced basis preconditioners for parametrized partial differential equations

    The multiquery solution of parametric partial differential equations (PDEs), that is, PDEs depending on a vector of parameters, is computationally challenging and appears in several engineering contexts, such as PDE-constrained optimization, uncertainty quantification or sensitivity analysis. When the finite element (FE) method is used as approximation technique, an algebraic system must be solved for each instance of the parameter, leading to a critical bottleneck in the multiquery context, one that is even more pronounced when dealing with nonlinear or time-dependent PDEs. Several techniques have been proposed to deal with sequences of linear systems, such as truncated Krylov subspace recycling methods, deflated restarting techniques and approximate inverse preconditioners; however, these techniques do not satisfactorily exploit the parameter dependence. More recently, the reduced basis (RB) method, together with other reduced order modeling (ROM) techniques, has emerged as an efficient tool to tackle parametrized PDEs. In this thesis, we investigate a novel preconditioning strategy for systems arising from the FE discretization of parametrized PDEs. Our preconditioner combines multiplicatively an RB coarse component, built upon the RB method, and a nonsingular fine-grid preconditioner. The proposed technique hinges upon the construction of a new Multi Space Reduced Basis (MSRB) method, where an RB solver is built at each step of the chosen iterative method and trained to accurately solve the error equation. The resulting preconditioner directly exploits the parameter dependence, since it is tailored to the class of problems at hand, and significantly speeds up the solution of the parametrized linear system. We analyze the proposed preconditioner from a theoretical standpoint, providing assumptions which lead to its well-posedness and efficiency.
We apply our strategy to a broad range of problems described by parametrized PDEs: (i) elliptic problems such as advection-diffusion-reaction equations, (ii) evolution problems such as time-dependent advection-diffusion-reaction equations or linear elastodynamics equations, (iii) saddle-point problems such as the Stokes equations, and, finally, (iv) the Navier-Stokes equations. Even though the structure of the preconditioner is similar for all these classes of problems, its fine and coarse components must be accurately chosen in order to provide the best possible results. Several comparisons are made with the current state-of-the-art preconditioning and ROM techniques. Finally, we employ the proposed technique to speed up the solution of problems in the field of cardiovascular modeling.
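The coarse/fine interplay can be illustrated with a minimal two-level sketch: a reduced-basis coarse correction combined multiplicatively with a damped-Jacobi smoother, applied once to a residual. The operator, parameter values and all names are illustrative stand-ins, not the thesis's MSRB construction:

```python
import numpy as np

# Minimal sketch of a two-level, RB-based preconditioner: a coarse
# correction on a snapshot-built reduced space, combined multiplicatively
# with a damped-Jacobi sweep on the remaining residual. Illustrative only.

n = 100
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
def A(mu):
    return K + mu * np.eye(n)

f = np.ones(n)

# "Offline": snapshots over a training set span the coarse RB space.
train = [0.1, 0.5, 1.0, 2.0, 5.0]
V, _ = np.linalg.qr(np.column_stack([np.linalg.solve(A(m), f) for m in train]))

def apply_preconditioner(mu, r, coarse=True):
    Amu = A(mu)
    z = np.zeros(n)
    if coarse:                                  # RB coarse correction
        z = V @ np.linalg.solve(V.T @ Amu @ V, V.T @ r)
        r = r - Amu @ z                         # multiplicative combination
    return z + 0.5 * r / np.diag(Amu)           # damped-Jacobi sweep

mu = 2.7                                        # parameter not in train
rel = lambda z: np.linalg.norm(f - A(mu) @ z) / np.linalg.norm(f)
rel_two_level = rel(apply_preconditioner(mu, f, coarse=True))
rel_smoother = rel(apply_preconditioner(mu, f, coarse=False))
```

One application of the two-level operator reduces the relative residual far more than the smoother alone, which is what makes such a component effective inside an outer Krylov iteration.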

    Optimization and Applications

    Proceedings of a workshop devoted to optimization problems, their theory and resolution, and above all their applications. The topics cover the existence and stability of solutions; the design, analysis, development and implementation of algorithms; and applications in mechanics, telecommunications, medicine and operations research.

    Snapshot-Based Methods and Algorithms

    The increasing complexity of models used to predict real-world systems leads to the need for algorithms to replace complex models with far simpler ones, while preserving the accuracy of the predictions. This two-volume handbook covers methods as well as applications. This second volume focuses on applications in engineering, biomedical engineering, computational physics and computer science.

    Kernel Methods for Surrogate Modeling

    This chapter deals with kernel methods as a special class of techniques for surrogate modeling. Kernel methods have proven to be efficient in machine learning, pattern recognition and signal analysis due to their flexibility, excellent experimental performance and elegant functional analytic background. These data-based techniques provide so-called kernel expansions, i.e., linear combinations of kernel functions which are generated from given input-output point samples that may be arbitrarily scattered. In particular, these techniques are meshless, requiring no grid, and hence are less prone to the curse of dimensionality, even for high-dimensional problems. In contrast to projection-based model reduction, we do not necessarily assume a high-dimensional model, but a general function that models input-output behavior within some simulation context. This could be a micro-model in a multiscale simulation, a submodel in a coupled system, an initialization function for solvers, a coefficient function in PDEs, etc. First, kernel surrogates can be useful if the input-output function is expensive to evaluate, e.g. the result of a finite element simulation; here, acceleration can be obtained by sparse kernel expansions. Second, if a function is available only via measurements or a few function evaluation samples, kernel approximation techniques can provide function surrogates that allow global evaluation. We present some important kernel approximation techniques, namely kernel interpolation, greedy kernel approximation and support vector regression. Pseudo-code is provided for ease of reproducibility. In order to illustrate the main features, commonalities and differences, we compare these techniques on a real-world application. The experiments clearly indicate the enormous acceleration potential.
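Greedy kernel approximation, one of the techniques surveyed, can be sketched in a few lines: centers where the current surrogate errs most are added one at a time, yielding a sparse kernel expansion. The target function, kernel width, tolerances and all names below are illustrative assumptions, not the chapter's pseudo-code (which also covers other selection rules and support vector regression):

```python
import numpy as np

# Minimal sketch of f-greedy kernel interpolation with a Gaussian kernel.
# Illustrative stand-in for the general scattered-data setting.

def kernel(x, y, eps=5.0):
    """Gaussian kernel matrix k(x_i, y_j) = exp(-(eps |x_i - y_j|)^2)."""
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

f = lambda x: np.sin(3 * x) + 0.5 * x      # stand-in for an expensive model
X = np.linspace(-1.0, 1.0, 200)            # scattered samples would also do
y = f(X)

centers = [0]                              # greedily chosen center indices
for _ in range(15):
    Xc = X[centers]
    coef = np.linalg.solve(kernel(Xc, Xc), y[centers])
    err = np.abs(y - kernel(X, Xc) @ coef) # error of current expansion
    k = int(np.argmax(err))                # f-greedy: worst point next
    if err[k] < 1e-6:
        break
    centers.append(k)

Xc = X[centers]
coef = np.linalg.solve(kernel(Xc, Xc), y[centers])
max_err = np.max(np.abs(y - kernel(X, Xc) @ coef))
```

The resulting surrogate is a kernel expansion with few terms, so evaluating it is cheap regardless of how expensive the original input-output map is; selection rules that do not look at function values (e.g. P-greedy) follow the same loop with a different indicator.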