27 research outputs found

    Kolmogorov widths under holomorphic mappings

    If L is a bounded linear operator mapping the Banach space X into the Banach space Y and K is a compact set in X, then the Kolmogorov widths of the image L(K) do not exceed those of K multiplied by the norm of L. We extend this result from linear maps to holomorphic mappings u from X to Y in the following sense: when the n-widths of K are O(n^{-r}) for some r > 1, then those of u(K) are O(n^{-s}) for any s < r - 1. We then use these results to prove various theorems about Kolmogorov widths of manifolds consisting of solutions to certain parametrized PDEs. Results of this type are important in the numerical analysis of reduced bases and other reduced modeling methods, since the best possible performance of such methods is governed by the rate of decay of the Kolmogorov widths of the solution manifold.
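    The linear statement above can be written compactly; with d_n denoting the Kolmogorov n-width (standard notation, assumed here rather than quoted from the paper):

```latex
d_n(K)_X = \inf_{\substack{Y_n \subset X \\ \dim Y_n \le n}} \, \sup_{x \in K} \, \inf_{y \in Y_n} \| x - y \|_X ,
\qquad
d_n\bigl(L(K)\bigr)_Y \le \|L\| \, d_n(K)_X .
```

    The holomorphic extension then trades roughly one order of decay: if d_n(K)_X = O(n^{-r}) with r > 1, then d_n(u(K))_Y = O(n^{-s}) for every s < r - 1.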

    Lipschitz dependence of the coefficients on the resolvent and greedy approximation for scalar elliptic problems

    We analyze the inverse problem of identifying the diffusivity coefficient of a scalar elliptic equation as a function of the resolvent operator. We prove that, within the class of measurable coefficients bounded above and below by positive constants, the resolvent determines the diffusivity in a unique manner. Furthermore, we prove that the inverse mapping from the resolvent to the coefficient is Lipschitz in suitable topologies. This result plays a key role when applying greedy algorithms to the approximation of parameter-dependent elliptic problems in a uniform and robust manner, independent of the given source terms. In one space dimension the results can be improved using the explicit expression of solutions, which allows one to link distances between one resolvent and a linear combination of finitely many others to the corresponding distances between coefficients. These results are also extended to multi-dimensional elliptic equations with variable density coefficients. We also point towards some possible extensions and open problems.
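    The greedy approximation alluded to above can be sketched in its generic, discrete form; the snapshot matrix, the fixed basis size, and all names below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def greedy_basis(snapshots, n_basis):
    """Greedy snapshot selection: at each step pick the snapshot that is
    worst approximated by the span of the snapshots chosen so far."""
    n_dof = snapshots.shape[0]
    Q = np.zeros((n_dof, 0))             # orthonormal basis built so far
    chosen = []
    for _ in range(n_basis):
        # projection error of every snapshot onto the current basis
        residuals = snapshots - Q @ (Q.T @ snapshots)
        errors = np.linalg.norm(residuals, axis=0)
        k = int(np.argmax(errors))       # worst-approximated snapshot
        chosen.append(k)
        # Gram-Schmidt step: orthonormalize the selected residual
        v = residuals[:, k]
        Q = np.hstack([Q, (v / np.linalg.norm(v))[:, None]])
    return Q, chosen
```

    Each step selects the snapshot the current basis approximates worst and orthonormalizes its residual; it is this structure whose uniform convergence, independent of the source terms, the Lipschitz stability result helps to guarantee.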

    Recursive POD expansion for the advection-diffusion-reaction equation

    This paper deals with the approximation of the solution of the advection-diffusion-reaction equation by reduced order methods. We use the Recursive POD approximation for multivariate functions introduced in [M. AZAÏEZ, F. BEN BELGACEM, T. CHACÓN REBOLLO, Recursive POD expansion for reaction-diffusion equation, Adv. Model. and Simul. in Eng. Sci. (2016) 3:3. DOI 10.1186/s40323-016-0060-1] and applied to the low tensor representation of the solution of the reaction-diffusion partial differential equation. In this contribution we extend the Recursive POD approximation to multivariate functions with an arbitrary number of parameters, for which we prove general error estimates. The method is used to approximate the solutions of the advection-diffusion-reaction equation. We prove spectral error estimates, in which the spectral convergence rate depends only on the diffusion interval, while the error estimates are affected by a factor that grows exponentially with the advection velocity and are independent of the reaction rate if it lives in a bounded set. These error estimates are based upon the analyticity of the solution of these equations as a function of the parameters (advection velocity, diffusion, reaction rate). We present several numerical tests, strongly consistent with the theoretical error estimates.
    Funding: Ministerio de Economía y Competitividad; Agence nationale de la recherche; Gruppo Nazionale per il Calcolo Scientifico; UE ERA-PLANE
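    In the discrete setting, a single POD level reduces to a truncated SVD of the snapshot matrix; a minimal sketch follows (illustrative only; the paper's recursive, multivariate construction goes well beyond this):

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """POD modes of a snapshot matrix: the left singular vectors whose
    singular values exceed tol relative to the largest one."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    return U[:, :rank], s[:rank]
```

    The recursive variant applies this one-parameter reduction level by level, which is what allows an arbitrary number of parameters to be handled with provable error estimates.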

    Kolmogorov widths and low-rank approximations of parametric elliptic PDEs

    Kolmogorov n-widths and low-rank approximations are studied for families of elliptic diffusion PDEs parametrized by the diffusion coefficients. The decay of the n-widths can be controlled by that of the error achieved by best n-term approximations using polynomials in the parametric variable. However, we prove that in certain relevant instances where the diffusion coefficients are piecewise constant over a partition of the physical domain, the n-widths exhibit significantly faster decay. This, in turn, yields a theoretical justification of the fast convergence of reduced basis or POD methods when treating such parametric PDEs. Our results are confirmed by numerical experiments, which also reveal the influence of the partition geometry on the decay of the n-widths.
    Comment: 27 pages, 6 figures
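    A quick way to observe such width decay numerically is through the singular values of a snapshot matrix, which bound the discrete n-widths in the least-squares sense; the parametric family below is a toy assumption, not the paper's piecewise-constant diffusion setup:

```python
import numpy as np

# Snapshots of the analytic parametric family u(x; mu) = 1 / (1 + mu * x);
# analyticity in the parameter produces exponentially decaying singular
# values, mirroring the fast n-width decay discussed above (toy example).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.1, 2.0, 100)
S = 1.0 / (1.0 + np.outer(x, mus))   # snapshot matrix, columns = parameters
s = np.linalg.svd(S, compute_uv=False)
decay = s / s[0]                     # normalized singular values
```

    A handful of modes already reach machine-precision accuracy here, which is the numerical signature of the fast decay the paper establishes.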

    Stochastic optimization methods for the simultaneous control of parameter-dependent systems

    We address the application of stochastic optimization methods for the simultaneous control of parameter-dependent systems. In particular, we focus on the classical Stochastic Gradient Descent (SGD) approach of Robbins and Monro, and on the recently developed Continuous Stochastic Gradient (CSG) algorithm. We consider the problem of computing simultaneous controls through the minimization of a cost functional defined as the superposition of individual costs for each realization of the system. We compare the performance of these stochastic approaches, in terms of their computational complexity, with that of the more classical Gradient Descent (GD) and Conjugate Gradient (CG) algorithms, and we discuss the advantages and disadvantages of each methodology. In agreement with well-established results in the machine learning context, we show how the SGD and CSG algorithms can significantly reduce the computational burden when treating control problems that depend on a large number of parameters. This is corroborated by numerical experiments.
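    The structure of such a superposed cost, and why sampling one realization per iteration cuts the per-step cost, can be shown on a quadratic toy problem; the cost and every name below are illustrative assumptions, not the paper's control system:

```python
import numpy as np

def sgd(targets, steps=2000, seed=0):
    """Minimize J(u) = (1/K) * sum_k 0.5 * ||u - targets[k]||^2 by
    sampling a single realization k per iteration and using
    Robbins-Monro step sizes 1/(n+1); the minimizer is the mean
    of the targets."""
    rng = np.random.default_rng(seed)
    u = np.zeros(targets.shape[1])
    for n in range(steps):
        k = rng.integers(len(targets))    # draw one parameter realization
        u -= (u - targets[k]) / (n + 1.0) # gradient step on the sampled cost
    return u
```

    Each iteration touches a single realization instead of all K of them, which is exactly the saving that makes SGD-type methods attractive when the parameter set is large.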

    Coupling parameter and particle dynamics for adaptive sampling in Neural Galerkin schemes

    Training nonlinear parametrizations such as deep neural networks to numerically approximate solutions of partial differential equations is often based on minimizing a loss that includes the residual, which is analytically available in limited settings only. At the same time, empirically estimating the training loss is challenging because residuals and related quantities can have high variance, especially for transport-dominated and high-dimensional problems that exhibit local features such as waves and coherent structures. Thus, estimators based on data samples from uninformed, uniform distributions are inefficient. This work introduces Neural Galerkin schemes that estimate the training loss with data from adaptive distributions, which are empirically represented via ensembles of particles. The ensembles are actively adapted by evolving the particles with dynamics coupled to the nonlinear parametrizations of the solution fields, so that the ensembles remain informative for estimating the training loss. Numerical experiments indicate that few dynamic particles are sufficient for obtaining accurate empirical estimates of the training loss, even for problems with local features and with high-dimensional spatial domains.
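    The adaptive-sampling idea can be illustrated in miniature: estimate an integral-type loss with particles drawn from a proposal concentrated where the residual is large, correcting with importance weights. Everything below (the residual, the proposal, all names) is a toy illustration, not the Neural Galerkin scheme itself:

```python
import numpy as np

def loss_estimate(residual_sq, particles, proposal_pdf):
    """Importance-weighted Monte Carlo estimate of the uniform-average
    loss E_{x ~ U[0,1]}[ r(x)^2 ] from particles drawn from an adapted
    proposal with density proposal_pdf."""
    weights = 1.0 / proposal_pdf(particles)
    return float(np.mean(residual_sq(particles) * weights))

# A localized squared residual, like a sharp wave front near x = 0.7.
r2 = lambda x: np.exp(-200.0 * (x - 0.7) ** 2)

# Particles adapted to the feature: a (crudely clipped) Normal(0.7, 0.1).
rng = np.random.default_rng(1)
xs = np.clip(rng.normal(0.7, 0.1, 5000), 1e-6, 1.0 - 1e-6)
q = lambda x: np.exp(-0.5 * ((x - 0.7) / 0.1) ** 2) / (0.1 * np.sqrt(2.0 * np.pi))
est = loss_estimate(r2, xs, q)   # close to the exact integral sqrt(pi/200)
```

    A uniform sample of the same size would place most particles where the residual is negligible; concentrating the particles near the feature is the variance reduction that makes few particles sufficient.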