171 research outputs found

    Comparison of some Reduced Representation Approximations

    In the field of numerical approximation, specialists dealing with highly complex problems have recently proposed various ways to simplify their underlying problems. Depending on the problem being tackled and the community at work, different approaches have been developed with some success and have even gained some maturity; they can now be applied to information analysis or to the numerical simulation of PDEs. At this point, a cross-analysis aimed at understanding the similarities and differences between these approaches, which grew out of different backgrounds, is of interest. The purpose of this paper is to contribute to this effort by comparing some constructive reduced representations of complex functions. We present in full detail the Adaptive Cross Approximation (ACA) and the Empirical Interpolation Method (EIM), together with other approaches that fall into the same category.
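    To make the flavor of such constructive reduced representations concrete, here is a minimal NumPy sketch of ACA with partial pivoting applied to a smooth kernel block. The kernel, tolerance, and simplified pivot bookkeeping (only the current row is excluded from the next pivot search) are illustrative choices, not taken from the paper.

```python
import numpy as np

def aca(get_row, get_col, n_rows, tol=1e-8, max_rank=60):
    """Adaptive Cross Approximation with partial pivoting (sketch).

    Builds a low-rank factorization sum_k u_k v_k^T of a matrix that is
    accessed only through individual rows and columns. A production code
    would also exclude all previously used pivot rows and use a relative
    stopping criterion; this is a minimal illustration.
    """
    U, V = [], []
    i = 0                                      # current row pivot
    for _ in range(max_rank):
        r = get_row(i).astype(float)
        for u, v in zip(U, V):                 # residual of row i
            r -= u[i] * v
        j = int(np.argmax(np.abs(r)))          # column pivot: largest residual entry
        if abs(r[j]) < tol:
            break
        c = get_col(j).astype(float)
        for u, v in zip(U, V):                 # residual of column j
            c -= u * v[j]
        c /= r[j]                              # scale so the cross matches A[i, j] exactly
        U.append(c); V.append(r)
        c_abs = np.abs(c); c_abs[i] = 0.0      # next row pivot: largest entry of new column
        i = int(np.argmax(c_abs))
    return np.array(U).T, np.array(V)

# usage on an asymptotically smooth kernel block, which is numerically low rank
x = np.linspace(0.0, 1.0, 300)
y = np.linspace(3.0, 4.0, 300)
A = 1.0 / (x[:, None] + y[None, :])
U, V = aca(lambda i: A[i, :].copy(), lambda j: A[:, j].copy(), A.shape[0])
print(f"rank {U.shape[1]}, rel. error {np.linalg.norm(A - U @ V) / np.linalg.norm(A):.2e}")
```

    The key point, which ACA shares with EIM, is that the approximation is built greedily from a few rows and columns (or snapshots) rather than from the full object.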

    Wavelet and Multiscale Methods

    Various scientific models demand finer and finer resolution of relevant features. Paradoxically, increasing computational power serves only to heighten this demand: the wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures, and developing rigorous models to quantify the quality of that information, leads to tasks that are not tractable by standard numerical techniques. The last decade has seen the emergence of several new computational methodologies to address this situation. Their common features are the nonlinearity of the solution methods and the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples are multigrid methods and adaptive grid solvers for partial differential equations, which have substantially advanced the frontiers of computability for certain problem classes in numerical analysis. Other highly visible examples are: regression techniques in nonparametric statistical estimation; the design of universal estimators in the context of mathematical learning theory and machine learning; the investigation of greedy algorithms in complexity theory; compression techniques and encoding in signal and image processing; the solution of global operator equations through the compression of the fully populated matrices arising from boundary integral equations, with the aid of multipole expansions and hierarchical matrices; and attacks on problems in high spatial dimensions by sparse grid or hyperbolic wavelet concepts. This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computation and to promote the exchange of ideas emerging in various disciplines.
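    As a small self-contained illustration of the common theme, a nonlinear method that separates length scales, here is a sketch of an orthonormal Haar wavelet decomposition followed by coefficient thresholding. The signal and the threshold value are arbitrary choices for demonstration, not taken from the workshop report.

```python
import numpy as np

def haar_decompose(x):
    # full orthonormal Haar transform: coarse averages plus details per scale
    coeffs = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse-scale averages
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # fine-scale details
        coeffs.append(d)
        x = a
    coeffs.append(x)
    return coeffs[::-1]                        # coarsest level first

def haar_reconstruct(coeffs):
    # invert the transform level by level
    x = coeffs[0]
    for d in coeffs[1:]:
        a, x = x, np.empty(2 * len(d))
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
    return x

# nonlinear approximation: discard small detail coefficients
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * t) + (t > 0.5)     # smooth part plus a jump
coeffs = haar_decompose(signal)
thresholded = [coeffs[0]] + [np.where(np.abs(d) > 0.05, d, 0.0) for d in coeffs[1:]]
kept = 1 + sum(int(np.count_nonzero(d)) for d in thresholded[1:])
rec = haar_reconstruct(thresholded)
print(f"kept {kept}/{n} coefficients, max error {np.max(np.abs(signal - rec)):.3f}")
```

    Thresholding is nonlinear because which coefficients survive depends on the signal itself; coefficients concentrate near the jump, which is exactly the scale-separation effect the abstract describes.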

    Approximation of high-dimensional parametric PDEs

    Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite; thus, the development of numerical methods for these parametric problems faces the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality. The first part of this article studies the smoothness and approximability of the solution map, that is, the map a ↦ u(a), where a is the parameter value and u(a) is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs this map is typically holomorphic in the parameters and also highly anisotropic, in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of n-term approximations to the solution map, for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best n-term approximation, sparsity, and n-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms. The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second searches for suitable low-dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.
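    The notion of best n-term approximation can be illustrated on a toy scalar "solution map": expand it in a tensorized orthonormal Legendre basis and keep only the n largest coefficients. The map, the anisotropy weights, the polynomial degrees, and the quadrature below are assumptions made for illustration, not the article's examples.

```python
import numpy as np
from itertools import product

def legendre_val(k, x):
    # degree-k Legendre polynomial, L2-normalized on [-1, 1]
    c = np.zeros(k + 1); c[k] = 1.0
    return np.polynomial.legendre.legval(x, c) * np.sqrt((2 * k + 1) / 2.0)

rho = np.array([1.0, 0.5, 0.25])           # decreasing importance per parameter
u = lambda y: 1.0 / (3.0 + y @ rho)        # toy holomorphic, anisotropic solution map

# tensor Gauss-Legendre quadrature on [-1, 1]^3
x, w = np.polynomial.legendre.leggauss(10)
pts = np.array(list(product(x, repeat=3)))
wts = np.prod(np.array(list(product(w, repeat=3))), axis=1)

# coefficients of the tensor Legendre expansion, degree <= 4 per variable
coeffs = {}
for nu in product(range(5), repeat=3):
    basis = np.prod([legendre_val(k, pts[:, j]) for j, k in enumerate(nu)], axis=0)
    coeffs[nu] = np.sum(wts * u(pts) * basis)

# best n-term approximation: retain the n largest coefficients
for n in (1, 5, 20):
    kept = sorted(coeffs, key=lambda nu: -abs(coeffs[nu]))[:n]
    y = np.random.uniform(-1, 1, (2000, 3))
    approx = sum(coeffs[nu] * np.prod([legendre_val(k, y[:, j])
                                       for j, k in enumerate(nu)], axis=0) for nu in kept)
    print(f"n = {n:2d}: max error {np.max(np.abs(u(y) - approx)):.2e}")
```

    Because the map is holomorphic and anisotropic, the coefficient magnitudes decay rapidly and the retained multi-indices cluster on the important parameters, which is the mechanism by which the curse of dimensionality is broken.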

    A reduced basis approach for variational problems with stochastic parameters: Application to heat conduction with variable Robin coefficient

    In this work, a Reduced Basis (RB) approach is used to solve a large number of boundary value problems parametrized by a stochastic input – expressed as a Karhunen–Loève expansion – in order to compute outputs that are smooth functionals of the random solution fields. The RB method proposed here for variational problems parametrized by stochastic coefficients bears many similarities to the RB approach developed previously for deterministic systems. However, the stochastic framework requires the development of new a posteriori estimates for "statistical" outputs – such as the first two moments of integrals of the random solution fields; these error bounds, in turn, permit efficient sampling of the input stochastic parameters and fast, reliable computation of the outputs, in particular in the many-query context.
    United States Air Force Office of Scientific Research (Grant FA9550-07-1-0425)
    Singapore-MIT Alliance for Research and Technology
    Chaire d'excellence AC
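    The deterministic RB machinery that this work builds on can be sketched in a few lines: a greedy loop picks the worst-approximated training parameter, enriches the basis with that snapshot, and approximates all other parameters by Galerkin projection onto the basis. The toy affine system below is an assumption made for illustration, and the true error stands in for the a posteriori error bound that the paper actually uses to drive the sampling.

```python
import numpy as np

# hypothetical toy problem: A(mu) u = f with affine dependence A(mu) = A0 + mu * A1
N = 200
A0 = np.diag(2.0 * np.ones(N)) + np.diag(-np.ones(N - 1), 1) + np.diag(-np.ones(N - 1), -1)
A1 = np.diag(np.linspace(0.0, 1.0, N))     # parameter-dependent part
f = np.ones(N)
solve = lambda mu: np.linalg.solve(A0 + mu * A1, f)

train = np.linspace(0.1, 10.0, 50)         # training set of parameters
V = np.empty((N, 0))                       # reduced basis, built greedily

for k in range(8):
    errs = []
    for mu in train:
        u = solve(mu)
        if V.shape[1]:                     # Galerkin projection onto span(V)
            c = np.linalg.solve(V.T @ (A0 + mu * A1) @ V, V.T @ f)
            u_rb = V @ c
        else:
            u_rb = np.zeros(N)
        errs.append(np.linalg.norm(u - u_rb))
    worst = int(np.argmax(errs))
    print(f"iter {k}: worst error {errs[worst]:.2e} at mu = {train[worst]:.2f}")
    # enrich the basis with the snapshot at the worst parameter (Gram-Schmidt)
    snap = solve(train[worst])
    snap -= V @ (V.T @ snap)
    V = np.hstack([V, (snap / np.linalg.norm(snap))[:, None]])
```

    In the stochastic setting of the paper, the parameters are the Karhunen–Loève coefficients of the random input, and cheap error bounds on statistical outputs replace the exhaustive error computation used above.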