The Construction of Nonseparable Wavelet Bi-Frames and Associated Approximation Schemes
Wavelet analysis and its fast algorithms are widely used in many fields of applied mathematics, such as signal and image processing. In the present thesis, we circumvent the restrictions of orthogonal and biorthogonal wavelet bases by constructing wavelet frames. They still allow for a stable decomposition, and so-called wavelet bi-frames provide a series expansion very similar to that of a pair of biorthogonal wavelet bases. Contrary to biorthogonal bases, primal and dual wavelets are no longer required to satisfy any geometric conditions, and the frame setting allows for redundancy. This provides more flexibility in their construction. Finally, we construct families of optimal wavelet bi-frames in arbitrary dimensions with arbitrarily high smoothness. We then verify that the n-term approximation rate can be described by Besov spaces, and we apply the theoretical findings to image denoising.
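The n-term approximation scheme mentioned above (keep the n largest wavelet coefficients, discard the rest, and reconstruct) can be illustrated with a minimal sketch. This toy uses the orthonormal 1-D Haar basis rather than the nonseparable bi-frames constructed in the thesis, and all function names are hypothetical:

```python
import numpy as np

def haar_step(x):
    # one level of the orthonormal 1-D Haar transform
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # coarse averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # fine details
    return s, d

def inv_haar_step(s, d):
    # invert one level of the transform
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def n_term_haar(x, n, levels=3):
    """n-term approximation: keep the n largest-magnitude Haar
    coefficients of x and reconstruct from them alone."""
    details, s = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        s, d = haar_step(s)
        details.append(d)                       # finest level first
    flat = np.concatenate([s] + details[::-1])  # coarse-to-fine layout
    flat[np.argsort(np.abs(flat))[:-n]] = 0.0   # zero all but the n largest
    s, pos = flat[:len(s)], len(s)
    for d in reversed(details):                 # rebuild, coarsest level first
        s = inv_haar_step(s, flat[pos:pos + len(d)])
        pos += len(d)
    return s
```

A piecewise-constant signal such as `[1, 1, 1, 1, 5, 5, 5, 5]` is reproduced exactly from just two Haar coefficients, which is the sparsity effect the n-term theory quantifies (via Besov smoothness) for much richer wavelet systems.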
Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis
We show how the discovery of robust scalable numerical solvers for arbitrary
bounded linear operators can be automated as a Game Theory problem by
reformulating the process of computing with partial information and limited
resources as that of playing underlying hierarchies of adversarial information
games. When the solution space is a Banach space B endowed with a quadratic
norm ‖·‖, the optimal measure (mixed strategy) for such games (e.g. the
adversarial recovery of u ∈ B, given partial measurements [φ_i, u] with
φ_i ∈ B*, using relative error in ‖·‖-norm as a loss) is a
centered Gaussian field ξ solely determined by the norm ‖·‖, whose
conditioning (on measurements) produces optimal bets. When measurements are
hierarchical, the process of conditioning this Gaussian field produces a
hierarchy of elementary bets (gamblets). These gamblets generalize the notion
of Wavelets and Wannier functions in the sense that they are adapted to the
norm ‖·‖ and induce a multi-resolution decomposition of B that is
adapted to the eigensubspaces of the operator defining the norm ‖·‖.
When the operator is localized, we show that the resulting gamblets are
localized both in space and frequency and introduce the Fast Gamblet Transform
(FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT
can be used to solve and diagonalize arbitrary PDEs with constant coefficients,
the FGT can be used to decompose a wide range of continuous linear operators
(including arbitrary continuous linear bijections from H^s_0 to H^{-s} or
to L^2) into a sequence of independent linear systems with uniformly bounded
condition numbers and leads to O(N polylog N)
solvers and eigenspace adapted Multiresolution Analysis (resulting in near
linear complexity approximation of all eigensubspaces).
Comment: 142 pages. 14 Figures. Presented at AFOSR (Aug 2016), DARPA (Sep 2016), IPAM (Apr 3, 2017), Hausdorff (April 13, 2017) and ICERM (June 5, 2017).
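In finite dimensions, one level of the conditioning step described above has a compact linear-algebra form. The sketch below is entirely synthetic (the SPD matrix, block-average measurements, and variable names are illustrative choices, not from the paper): column i of Psi minimizes the A-norm among all vectors whose measurements equal the i-th unit vector, which is also the conditional expectation of the Gaussian field N(0, A⁻¹) given those measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SPD "operator" matrix defining the energy norm |u|_A^2 = u^T A u
n, m = 12, 4
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)

# Toy hierarchical measurements: local averages over m blocks of n/m nodes
Phi = np.kron(np.eye(m), np.ones((1, n // m)))

# Gamblet i minimizes the A-norm over all u with Phi u = e_i; equivalently it
# is the conditional expectation of the Gaussian field N(0, A^{-1}) given the
# measurements Phi u = e_i.  The columns of Psi are the gamblets:
Ainv_PhiT = np.linalg.solve(A, Phi.T)
Psi = Ainv_PhiT @ np.linalg.inv(Phi @ Ainv_PhiT)

# The optimal bet given the data y = Phi u is the linear combination Psi @ y
u = rng.standard_normal(n)
u_hat = Psi @ (Phi @ u)
```

By construction Phi @ Psi is the identity (each gamblet responds to exactly one measurement functional), and the bet u_hat reproduces the observed measurements of u exactly.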
Approximation of high-dimensional parametric PDEs
Parametrized families of PDEs arise in various contexts such as inverse
problems, control and optimization, risk assessment, and uncertainty
quantification. In most of these applications, the number of parameters is
large or perhaps even infinite. Thus, the development of numerical methods for
these parametric problems is faced with the possible curse of dimensionality.
This article is directed at (i) identifying and understanding which properties
of parametric equations allow one to avoid this curse and (ii) developing and
analyzing effective numerical methods which fully exploit these properties and,
in turn, are immune to the growth in dimensionality. The first part of this
article studies the smoothness and approximability of the solution map, that
is, the map a ↦ u(a), where a is the parameter value and u(a) is the
corresponding solution to the PDE. It is shown that for many relevant
parametric PDEs, the parametric smoothness of this map is typically holomorphic
and also highly anisotropic in that the relevant parameters are of widely
varying importance in describing the solution. These two properties are then
exploited to establish convergence rates of n-term approximations to the
solution map for which each term is separable in the parametric and physical
variables. These results reveal that, at least on a theoretical level, the
solution map can be well approximated by discretizations of moderate
complexity, thereby showing how the curse of dimensionality is broken. This
theoretical analysis is carried out through concepts of approximation theory
such as best n-term approximation, sparsity, and n-widths. These notions
determine a priori the best possible performance of numerical methods and thus
serve as a benchmark for concrete algorithms. The second part of this article
turns to the development of numerical algorithms based on the theoretically
established sparse separable approximations. The numerical methods studied fall
into two general categories. The first uses polynomial expansions in terms of
the parameters to approximate the solution map. The second one searches for
suitable low dimensional spaces for simultaneously approximating all members of
the parametric family. The numerical implementation of these approaches is
carried out through adaptive and greedy algorithms. An a priori analysis of the
performance of these algorithms establishes how well they meet the theoretical
benchmarks.
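The second category of methods above (finding one low-dimensional space for all members of the parametric family) is typically driven by a greedy loop. The following is a schematic sketch, with hypothetical names and the Euclidean norm standing in for the residual-based error estimator used in practice: at each step, the snapshot that is currently worst approximated by the span of the selected basis is added to it.

```python
import numpy as np

def greedy_basis(snapshots, n_max, tol=1e-10):
    """Pick basis vectors greedily from the columns of `snapshots`:
    always take the snapshot with the largest current projection error."""
    U = np.array(snapshots, dtype=float)   # working copy, gets deflated
    basis = []
    for _ in range(n_max):
        errs = np.linalg.norm(U, axis=0)   # error of each remaining snapshot
        k = int(np.argmax(errs))
        if errs[k] < tol:                  # everything is well approximated
            break
        v = U[:, k] / errs[k]              # normalize the worst offender
        basis.append(v)
        U -= np.outer(v, v @ U)            # project the new direction out
    return np.column_stack(basis)
```

Because each new vector is taken from the deflated residuals, the returned basis is orthonormal, and the loop stops as soon as the worst projection error drops below the tolerance, mirroring the a priori rates established in the first part of the article.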
Snapshot-Based Methods and Algorithms
The increasing complexity of models used to predict real-world systems leads to a need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This two-volume handbook covers methods as well as applications. This second volume focuses on applications in engineering, biomedical engineering, computational physics and computer science.
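The canonical snapshot-based method is Proper Orthogonal Decomposition (POD): collect solutions of the complex model as columns of a snapshot matrix and keep the leading left singular vectors as the reduced basis. A minimal sketch (the function name and energy criterion are illustrative choices, not taken from the handbook):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper Orthogonal Decomposition: SVD of the snapshot matrix,
    truncated to capture the requested fraction of the 'energy'
    (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)       # cumulative energy fraction
    r = int(np.searchsorted(cum, energy)) + 1  # smallest rank reaching it
    return U[:, :r], s[:r]
```

Projecting the governing equations onto the span of the returned basis yields the far simpler surrogate model; the discarded singular values bound the snapshot reconstruction error.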
Model Order Reduction
The increasing complexity of models used to predict real-world systems leads to a need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This two-volume handbook covers methods as well as applications. This second volume focuses on applications in engineering, biomedical engineering, computational physics and computer science.
Problèmes inverses d'hémodynamique. Estimation rapide des flux sanguins à partir de données médicales
This thesis presents a work at the interface between applied mathematics and biomedical engineering. The work's main subject is the estimation of blood flows and quantities of medical interest in diagnosing certain diseases concerning the cardiovascular system. We propose a complete pipeline, providing the theoretical foundations for state estimation from medical data using reduced-order models, and addressing inter-patient variability. Extensive numerical tests are shown in realistic 3D scenarios that demonstrate the potential impact of the work in the medical community.
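The state-estimation step described above can be caricatured in linear algebra: given a reduced basis built offline and a handful of measurements of the true state, fit the reduced coordinates in least squares and lift back to the full space. Everything below is synthetic (a random basis and measurement operator standing in for the hemodynamic models and clinical data of the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a "true" state lying in the span of a reduced-order
# basis V built offline, observed only through a few linear measurements.
n, r, m = 40, 3, 8                                 # state, reduced, data dims
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # reduced-order basis
M = rng.standard_normal((m, n))                    # measurement operator

u_true = V @ np.array([1.0, -2.0, 0.5])            # state in the reduced space
y = M @ u_true                                     # observed data

# State estimation: fit the reduced coordinates to the data in least squares,
# then lift the estimate back to the full space.
c, *_ = np.linalg.lstsq(M @ V, y, rcond=None)
u_hat = V @ c
```

With more measurements than reduced coordinates (m > r), the least-squares fit is well posed, and states that actually lie in the span of the basis are recovered exactly; model error and measurement noise, central concerns of the thesis, would degrade this in practice.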