Greedy expansions with prescribed coefficients in Hilbert spaces for special classes of dictionaries
Greedy expansions with prescribed coefficients were introduced by V. N.
Temlyakov in the framework of Banach spaces. The idea is to choose a sequence of
fixed (real) coefficients and a fixed set of elements
(dictionary) of the Banach space; then, under suitable conditions on the
coefficients and the dictionary, it is possible to expand all the elements of
the Banach space in series that contain only the fixed coefficients and the
elements of the dictionary. In Hilbert spaces, the convergence of the greedy
algorithm with prescribed coefficients has been characterized, in the sense that
there are necessary and sufficient conditions on the coefficients under which
the algorithm converges for all dictionaries. This paper asks whether such
conditions can be weakened for particular dictionaries; we prove that this is
the case for some classes of dictionaries related to orthonormal sequences.
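As a concrete toy (a finite-dimensional Hilbert space with an orthonormal dictionary, not the general Banach-space setting of the paper), the algorithm can be sketched as follows. The sequence c_m = 1/m, whose sum diverges while its squares are summable, is one standard admissible coefficient choice; all names here are illustrative.

```python
import numpy as np

def greedy_prescribed(f, dictionary, coeffs):
    """Greedy expansion with prescribed coefficients: at step m, pick the
    dictionary element best aligned with the residual and subtract c_m
    times that element (sign-adjusted so the step always helps)."""
    r = np.asarray(f, dtype=float).copy()
    expansion = np.zeros_like(r)
    for c_m in coeffs:
        inners = dictionary @ r                 # <r, g> for every g
        k = np.argmax(np.abs(inners))
        g = np.sign(inners[k]) * dictionary[k]  # symmetrized dictionary
        expansion += c_m * g
        r -= c_m * g
    return expansion, r

D = np.eye(3)                              # orthonormal dictionary in R^3
f = np.array([0.7, -0.2, 0.5])
coeffs = [1.0 / m for m in range(1, 201)]  # c_m = 1/m: divergent sum, summable squares
approx, resid = greedy_prescribed(f, D, coeffs)
```

Because the expansion may use only the prescribed coefficients, the residual oscillates at roughly the scale of the current c_m instead of decreasing monotonically, which is exactly why conditions on the coefficient sequence are needed.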
Approximation of high-dimensional parametric PDEs
Parametrized families of PDEs arise in various contexts such as inverse
problems, control and optimization, risk assessment, and uncertainty
quantification. In most of these applications, the number of parameters is
large or perhaps even infinite. Thus, the development of numerical methods for
these parametric problems is faced with the possible curse of dimensionality.
This article is directed at (i) identifying and understanding which properties
of parametric equations allow one to avoid this curse and (ii) developing and
analyzing effective numerical methods which fully exploit these properties and,
in turn, are immune to the growth in dimensionality. The first part of this
article studies the smoothness and approximability of the solution map, that
is, the map a ↦ u(a), where a is the parameter value and u(a) is the
corresponding solution to the PDE. It is shown that for many relevant
parametric PDEs this map is typically holomorphic in the parameters and also
highly anisotropic, in that the relevant parameters are of widely varying
importance in describing the solution. These two properties are then
exploited to establish convergence rates of n-term approximations to the
solution map for which each term is separable in the parametric and physical
variables. These results reveal that, at least on a theoretical level, the
solution map can be well approximated by discretizations of moderate
complexity, thereby showing how the curse of dimensionality is broken. This
theoretical analysis is carried out through concepts of approximation theory
such as best n-term approximation, sparsity, and n-widths. These notions
determine a priori the best possible performance of numerical methods and thus
serve as a benchmark for concrete algorithms. The second part of this article
turns to the development of numerical algorithms based on the theoretically
established sparse separable approximations. The numerical methods studied fall
into two general categories. The first uses polynomial expansions in terms of
the parameters to approximate the solution map. The second one searches for
suitable low dimensional spaces for simultaneously approximating all members of
the parametric family. The numerical implementation of these approaches is
carried out through adaptive and greedy algorithms. An a priori analysis of the
performance of these algorithms establishes how well they meet the theoretical
benchmarks.
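The interplay of anisotropy and best n-term selection can be illustrated on a toy holomorphic map (a schematic exponential stand-in, not one of the article's PDE solution maps): the tensorized Taylor coefficients of u(y) = exp(c·y) decay much faster in the unimportant parameter, so keeping the n terms with the largest coefficients yields rapidly improving separable approximations.

```python
import numpy as np
from itertools import product
from math import factorial

# toy holomorphic, anisotropic map: u(y) = exp(c1*y1 + c2*y2)
c = np.array([0.9, 0.1])     # parameter 1 matters far more than parameter 2

# tensorized Taylor coefficients t_k = c1^k1 * c2^k2 / (k1! * k2!)
coef = {k: c[0] ** k[0] * c[1] ** k[1] / (factorial(k[0]) * factorial(k[1]))
        for k in product(range(10), repeat=2)}

def best_n_term(y, n):
    """Sum the n Taylor terms with the largest coefficients."""
    top = sorted(coef, key=lambda k: -coef[k])[:n]
    return sum(coef[k] * y[0] ** k[0] * y[1] ** k[1] for k in top)

y = (0.5, 0.5)
exact = float(np.exp(c @ np.array(y)))
errors = [abs(exact - best_n_term(y, n)) for n in (1, 5, 15)]
```

The selected index sets are strongly lopsided toward powers of y1, mirroring the anisotropic n-term sets analyzed in the article.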
Weighted Thresholding and Nonlinear Approximation
We present a new method for performing nonlinear approximation with redundant
dictionaries. The method constructs an n-term approximation of the signal by
thresholding with respect to a weighted version of its canonical expansion
coefficients, thereby accounting for dependency between the coefficients. The
main result is an associated strong Jackson embedding, which provides an upper
bound on the corresponding reconstruction error. To complement the theoretical
results, we compare the proposed method to the pure greedy method and the
Windowed-Group Lasso by denoising music signals with elements from a Gabor
dictionary.
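A minimal sketch of the thresholding step, in which an orthonormal basis and a simple neighbourhood-energy weight stand in for the paper's redundant Gabor dictionary and its weight construction; all names here are illustrative.

```python
import numpy as np

def weighted_threshold_approx(f, basis, n, window=1):
    """n-term approximation by thresholding *weighted* canonical
    coefficients: each coefficient is weighted by the energy of its
    neighbourhood, modelling dependency between coefficients."""
    coeffs = basis @ f                    # canonical expansion coefficients
    weights = np.array([np.linalg.norm(coeffs[max(0, i - window):i + window + 1])
                        for i in range(len(coeffs))])
    keep = np.argsort(-np.abs(coeffs) * weights)[:n]  # n largest weighted terms
    approx = np.zeros_like(f, dtype=float)
    for i in keep:
        approx += coeffs[i] * basis[i]
    return approx

basis = np.eye(10)               # orthonormal stand-in dictionary
f = np.zeros(10)
f[0] = 0.95                      # isolated, noise-like spike
f[5:8] = [1.0, 0.9, 1.0]         # coherent, signal-like cluster
approx = weighted_threshold_approx(f, basis, n=3)
```

Plain magnitude thresholding with n = 3 would keep the isolated spike (0.95 > 0.9); the neighbourhood weight demotes it in favour of the full cluster, which is the dependency effect the weighting is meant to capture.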
A mixed regularization approach for sparse simultaneous approximation of parameterized PDEs
We present and analyze a novel sparse polynomial technique for the
simultaneous approximation of parameterized partial differential equations
(PDEs) with deterministic and stochastic inputs. Our approach treats the
numerical solution as a jointly sparse reconstruction problem through the
reformulation of the standard basis pursuit denoising, where the set of jointly
sparse vectors is infinite. To achieve global reconstruction of sparse
solutions to parameterized elliptic PDEs over both physical and parametric
domains, we combine the standard measurement scheme developed for compressed
sensing in the context of bounded orthonormal systems with a novel mixed-norm
based regularization method that exploits both energy and sparsity. In
addition, we are able to prove that, with minimal sample complexity, error
estimates comparable to the best s-term and quasi-optimal approximations are
achievable, while requiring only a priori bounds on polynomial truncation error
with respect to the energy norm. Finally, we perform extensive numerical
experiments on several high-dimensional parameterized elliptic PDE models to
demonstrate the superior recovery properties of the proposed approach.
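A schematic of the mixed-norm idea (a finite-dimensional l2,1-regularized group lasso solved by proximal gradient; the paper's actual formulation is an infinite-dimensional basis-pursuit-denoising problem with energy-norm weighting, so everything below is an illustrative stand-in): the row-wise soft-thresholding prox zeroes whole rows at once, which is what enforces joint sparsity across all right-hand sides.

```python
import numpy as np

def group_soft_threshold(B, t):
    """Prox of t*||.||_{2,1}: shrink each row's l2 norm by t, zeroing
    whole rows at once (this enforces *joint* sparsity)."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    return np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0) * B

def mixed_norm_recover(A, Y, lam=0.05, iters=500):
    """Proximal gradient for min_X 0.5*||A X - Y||_F^2 + lam*||X||_{2,1}."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        X = group_soft_threshold(X - step * (A.T @ (A @ X - Y)), lam * step)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)    # measurement matrix
X_true = np.zeros((80, 5))
X_true[[3, 17, 42]] = rng.standard_normal((3, 5))  # jointly sparse rows
X_hat = mixed_norm_recover(A, A @ X_true)
```

Solving for all five right-hand sides at once lets the shared row support be recovered from far fewer measurements than five independent sparse recoveries would need.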
Nonlinear Methods for Model Reduction
The usual approach to model reduction for parametric partial differential
equations (PDEs) is to construct a linear space V_n of moderate dimension n
which approximates well the solution manifold M = {u(a) : a ∈ A} consisting of
all solutions u(a) with a the vector of parameters. This linear reduced model
is then used for
various tasks such as building an online forward solver for the PDE or
estimating parameters from data observations. It is well understood in other
problems of numerical computation that nonlinear methods such as adaptive
approximation, n-term approximation, and certain tree-based methods may
provide improved numerical efficiency. For model reduction, a nonlinear method
would replace the linear space V_n by a nonlinear space Σ_n. This idea
has already been suggested in recent papers on model reduction where the
parameter domain is decomposed into a finite number of cells and a linear space
of low dimension is assigned to each cell.
Up to this point, little is known in terms of performance guarantees for such
a nonlinear strategy. Moreover, most numerical experiments for nonlinear model
reduction use a parameter dimension of only one or two. In this work, a step is
made towards a more cohesive theory for nonlinear model reduction. Framing
these methods in the general setting of library approximation allows us to give
a first comparison of their performance with those of standard linear
approximation for any general compact set. We then turn to the study of these
methods for solution manifolds of parametrized elliptic PDEs. We study a very
specific example of library approximation where the parameter domain is split
into a finite number N of rectangular cells and where different reduced
affine spaces of dimension n are assigned to each cell. The performance of
this nonlinear procedure is analyzed from the viewpoint of accuracy of
approximation versus n and N.
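A small numerical sketch of the library idea in one parameter dimension (POD/PCA snapshot spaces stand in for the reduced affine spaces, and the translated-bump manifold is a standard example with slowly decaying linear widths; all of it is illustrative): splitting the parameter interval into cells and giving each cell its own affine space of the same dimension beats one global space.

```python
import numpy as np

# toy solution manifold: translated bumps u(a)(x) = exp(-(x - a)^2 / 0.01)
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.2, 0.8, 64)
snapshots = np.array([np.exp(-(x - a) ** 2 / 0.01) for a in params])

def worst_error(S, n):
    """Worst snapshot error of projection onto the n-dimensional
    affine POD space (mean + leading n principal directions) of S."""
    centered = S - S.mean(axis=0)
    U, _, _ = np.linalg.svd(centered.T, full_matrices=False)
    B = U[:, :n]                          # orthonormal basis of n modes
    return np.max(np.linalg.norm(centered - centered @ B @ B.T, axis=1))

n = 4
global_err = worst_error(snapshots, n)    # one global affine space
cells = np.array_split(np.arange(len(params)), 4)
library_err = max(worst_error(snapshots[c], n) for c in cells)  # N = 4 cells
```

For transport-dominated manifolds like this one, a single space of dimension n does poorly, while each cell covers only a short parameter range and is well approximated locally; that gap is exactly the gain the nonlinear library buys.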