Non-equispaced B-spline wavelets
This paper has three main contributions. The first is the construction of
wavelet transforms from B-spline scaling functions defined on a grid of
non-equispaced knots. The new construction extends the equispaced,
biorthogonal, compactly supported Cohen-Daubechies-Feauveau wavelets and is
based on the factorisation of wavelet transforms into lifting
steps. The second and third contributions are new insights on how to use these
and other wavelets in statistical applications. The second contribution is
related to the bias of a wavelet representation: how should the fine scaling
coefficients be derived from the observations? For equispaced data, it is
common practice to simply take the observations as fine-scale coefficients.
This paper argues that this practice is not acceptable for non-interpolating
wavelets on non-equidistant data.
Finally, the third contribution is the study of the variance in a
non-orthogonal wavelet transform in a new framework, replacing the numerical
condition as a measure for non-orthogonality. By controlling the variances of
the reconstruction from the wavelet coefficients, the new framework allows us
to design wavelet transforms on irregular point sets with a focus on their use
for smoothing or other applications in statistics.
Comment: 42 pages, 2 figures
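The lifting factorisation mentioned in the abstract can be illustrated with a minimal sketch (my own simplification, not the paper's construction): one predict/update step of a linear-spline transform in which the predict weights follow the actual knot spacing, while the update step keeps the simple equispaced d/4 choice; the paper derives knot-dependent update weights and higher-order B-spline filters.

```python
import numpy as np

def lifting_forward(x, t):
    """One predict/update step of a linear-spline wavelet transform on
    non-equispaced knots t (sorted, odd length, matching x).  The predict
    weights follow the knot spacing; the update keeps the simple
    equispaced d/4 choice, unlike the paper's knot-adapted filters."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    te, to = t[0::2], t[1::2]
    tl, tr = te[:-1], te[1:]              # even knots flanking each odd knot
    w = (tr - to) / (tr - tl)             # linear interpolation weights
    d = odd - (w * even[:-1] + (1.0 - w) * even[1:])   # detail coefficients
    s = even.copy()                       # coarse scaling coefficients
    s[:-1] += 0.25 * d
    s[1:] += 0.25 * d
    return s, d

def lifting_inverse(s, d, t):
    """Exact inverse: undo the lifting steps in reverse order."""
    te, to = t[0::2], t[1::2]
    tl, tr = te[:-1], te[1:]
    w = (tr - to) / (tr - tl)
    even = s.copy()
    even[:-1] -= 0.25 * d
    even[1:] -= 0.25 * d
    odd = d + (w * even[:-1] + (1.0 - w) * even[1:])
    x = np.empty(len(s) + len(d))
    x[0::2], x[1::2] = even, odd
    return x
```

Because the predict step interpolates linearly at the true knot positions, the detail coefficients vanish exactly for data that is linear in t, and the transform is perfectly invertible by construction.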
Multiscale and High-Dimensional Problems
High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples come from big data analysis, including regression and classification, which typically encounters high-dimensional data as input and/or output. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even serve to heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting constitute challenging tasks from both the theoretical and the numerical perspective.
The last decade has seen the emergence of several new computational methodologies which address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse grid and hyperbolic wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution method, which prioritizes variables and separates solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes.
This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.
Institute for Computational Mechanics in Propulsion (ICOMP)
The Institute for Computational Mechanics in Propulsion (ICOMP) is a combined activity of Case Western Reserve University, Ohio Aerospace Institute (OAI) and NASA Lewis. The purpose of ICOMP is to develop techniques to improve problem solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1991 are described.
[Activity of Institute for Computer Applications in Science and Engineering]
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science.
Operator-adapted wavelets for finite-element differential forms
We introduce in this paper an operator-adapted multiresolution analysis for finite-element differential forms. From a given continuous, linear, bijective, and self-adjoint positive-definite operator L, a hierarchy of basis functions and associated wavelets for discrete differential forms is constructed in a fine-to-coarse fashion and in quasilinear time. The resulting wavelets are L-orthogonal across all scales, and can be used to derive a Galerkin discretization of the operator such that its stiffness matrix becomes block-diagonal, with uniformly well-conditioned and sparse blocks. Because our approach applies to arbitrary differential p-forms, we can derive both scalar-valued and vector-valued wavelets block-diagonalizing a prescribed operator. We also discuss the generality of the construction by pointing out that it applies to various types of computational grids, offers arbitrary smoothness orders of basis functions and wavelets, and can accommodate linear differential constraints such as divergence-freeness. Finally, we demonstrate the benefits of the corresponding operator-adapted multiresolution decomposition for coarse-graining and model reduction of linear and non-linear partial differential equations.
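The block-diagonalisation claim can be checked on a toy two-level example (a sketch under my own assumptions: 1D Laplacian stiffness for hat functions on a uniform mesh, a single coarsening step): each wavelet is a primitive detail function minus its L-projection onto the coarse space, which makes the coarse/fine coupling block of the Galerkin stiffness matrix vanish.

```python
import numpy as np

h = 1.0 / 8.0                                  # fine mesh width on (0, 1)
n = 7                                          # interior fine-grid nodes
# Stiffness matrix of the 1D Laplacian for piecewise-linear hat functions.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Coarse hat functions written in the fine basis (standard refinement rule).
P = np.zeros((n, 3))
for j in range(3):
    P[2 * j, j] = 0.5
    P[2 * j + 1, j] = 1.0
    P[2 * j + 2, j] = 0.5

# Primitive details: fine hats at the nodes the coarse mesh drops.
D = np.zeros((n, 4))
for k in range(4):
    D[2 * k, k] = 1.0

# Operator-adapt: subtract from each detail its A-projection onto the
# coarse space, so every wavelet is A-orthogonal to every coarse function.
W = D - P @ np.linalg.solve(P.T @ A @ P, P.T @ A @ D)

B = np.hstack([P, W])                          # two-level hierarchical basis
S = B.T @ A @ B                                # multiresolution stiffness
# The coarse/fine coupling block S[:3, 3:] is zero up to roundoff, so the
# Galerkin stiffness matrix is block-diagonal across the two scales.
```

The paper's construction works fine-to-coarse, over many levels, and for general p-forms; this sketch only demonstrates the defining orthogonality on one refinement step.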
An operator-customized wavelet-finite element approach for the adaptive solution of second-order partial differential equations on unstructured meshes
Thesis (Ph. D.)--Massachusetts Institute of Technology, Civil and Environmental Engineering, 2005. Includes bibliographical references (p. 139-142).
The Finite Element Method (FEM) is a widely popular method for the numerical solution of Partial Differential Equations (PDEs) on multi-dimensional unstructured meshes. Lagrangian finite elements, which preserve C⁰ continuity with interpolating piecewise-polynomial shape functions, are a common choice for second-order PDEs. Conventional single-scale methods often have difficulty in efficiently capturing fine-scale behavior (e.g. singularities or transients) without resorting to a prohibitively large number of variables. This can be done more effectively with a multi-scale method, such as the Hierarchical Basis (HB) method. However, the HB FEM generally yields a multi-resolution stiffness matrix that is coupled across scales.
Unlike first-generation wavelets, second-generation wavelets can be constructed on any multi-dimensional unstructured mesh. Instead of limiting ourselves to the choice of primitive wavelets, effectively HB detail functions, we can tailor the wavelets to gain additional qualities. In particular, we propose to customize our wavelets to the problem's operator: we propose a powerful generalization of the Hierarchical Basis, namely a second-generation wavelet basis spanning a Lagrangian finite element space of any given polynomial order. For any given linear elliptic second-order PDE, and within a Lagrangian FE space of any given order, we can construct a basis of compactly supported wavelets that are orthogonal to the coarser basis functions with respect to the weak form of the PDE. We expose the connection between the wavelet's vanishing-moment properties and the requirements for operator-orthogonality in multiple dimensions. We give examples in which we successfully eliminate all scale-coupling in the problem's multi-resolution stiffness matrix. Consequently, details can be added locally to a coarser solution without having to re-compute the coarser solution.
by Stefan F. D'Heedene. Ph.D.
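The operator-orthogonality at the heart of this construction can be stated compactly; in notation of my choosing (not the thesis's), with a(·,·) the bilinear form of the weak problem, each wavelet ψ at level j must satisfy

```latex
a(\psi_{j,k},\, \varphi_{j-1,l}) = 0
\qquad \text{for every coarser basis function } \varphi_{j-1,l},
```

so the coarse/fine coupling blocks of the multi-resolution stiffness matrix vanish. For the model form a(u,v) = ∫ u′v′ in one dimension with piecewise-linear φ, the derivatives φ′ are piecewise constant, and the condition becomes a vanishing-moment requirement on ψ′, which is the link between vanishing moments and operator-orthogonality that the abstract refers to.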
Operator-adapted finite element wavelets : theory and applications to a posteriori error estimation and adaptive computational modeling
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2005. Includes bibliographical references (leaves 166-171).
We propose a simple and unified approach for a posteriori error estimation and adaptive mesh refinement in finite element analysis using multiresolution signal processing principles. Given a sequence of nested discretizations of a domain, we begin by constructing approximation spaces at each level of discretization spanned by conforming finite element interpolation functions. The solution to the virtual work equation can then be expressed as a telescopic sum consisting of the solution on the coarsest mesh along with a sequence of error terms denoted as two-level errors. These error terms are the projections of the solution onto complementary spaces that are scale-orthogonal with respect to the inner product induced by the weak form of the governing differential operator. The problem of generating a compact, yet accurate representation of the solution then reduces to that of generating a compact, yet accurate representation of each of these error components. This problem is solved in three steps: (a) we first efficiently construct a set of scale-orthogonal wavelets that form a Riesz stable basis (in the energy norm) for the complementary spaces; (b) we then efficiently estimate the contribution of each wavelet to the two-level error; and finally (c) we select a subset of the wavelets at each level to preserve and solve exactly for the corresponding coefficients. Our approach has several advantages over a posteriori error estimation and adaptive refinement techniques in vogue in finite element analysis. First, in contrast to the true error, the two-level errors can be estimated very accurately even on coarse meshes. Second, mesh refinement is carried out by the addition of wavelets rather than element subdivision. This implies that the technique does not have to directly deal with the handling of irregular vertices. Third, the error estimation and adaptive refinement steps use the same basis. Therefore, the estimates accurately predict how much the error will reduce upon mesh refinement. Finally, the proposed approach naturally and easily accommodates error estimation and adaptive refinement based on both the energy norm as well as any bounded linear functional of interest (i.e., goal-oriented error estimation and adaptivity). We demonstrate the application of our approach to the adaptive solution of second- and fourth-order problems such as heat transfer, linear elasticity and deformation of thin plates.
by Raghunathan Sudarshan. Ph.D.
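The telescopic sum described in this abstract can be written out explicitly; with u_j denoting the finite element solution on the j-th mesh (notation mine, not the thesis's):

```latex
u_J \;=\; u_0 \;+\; \sum_{j=0}^{J-1} e_j,
\qquad e_j \;=\; u_{j+1} - u_j,
\qquad a(e_j, v) \;=\; 0 \quad \forall\, v \in V_j .
```

The last identity is Galerkin orthogonality for a symmetric bilinear form a(·,·): each two-level error lies in a complement of V_j that is scale-orthogonal in the energy inner product, and it is this complement that the thesis equips with a Riesz-stable wavelet basis before estimating and selecting coefficients.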
Semiannual report, 1 October 1990 - 31 March 1991
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized.
Adaptive Scattered Data Fitting with Tensor Product Spline-Wavelets
The core of the work we present here is an algorithm that constructs a least squares approximation to a given set of unorganized points. The approximation is expressed as a linear combination of particular B-spline wavelets. It relies on a multiresolution setting which constructs a hierarchy of approximations to the data with increasing level of detail, proceeding from coarsest to finest scales. It allows for an efficient selection of the degrees of freedom of the problem and avoids the introduction of an artificial uniform grid. In fact, an analysis of the data can be done at each of the scales of the hierarchy, which can be used to adaptively select a set of wavelets that can economically represent the characteristics of the cloud of points in the next level of detail. The data adaptation of our method is twofold, as it takes into account both the horizontal distribution and the vertical irregularities of the data. This strategy can lead to a striking reduction of the problem complexity. Furthermore, among the possible ways to achieve a multiscale formulation, the wavelet approach shows additional advantages, based on good conditioning properties and level-wise orthogonality. We exploit these features to enhance the efficiency of iterative solution methods for the system of normal equations of the problem. The combination of multiresolution adaptivity with the numerical properties of the wavelet basis gives rise to an algorithm well suited to cope with problems requiring fast solution methods. We illustrate this by means of numerical experiments that compare the performance of the method on various data sets working with different multi-resolution bases. Afterwards, we use the equivalence relation between wavelets and Besov spaces to formulate the problem of data fitting with regularization. We find that the multiscale formulation allows for a flexible and efficient treatment of some aspects of this problem.
Moreover, we study the problem known as robust fitting, in which the data is assumed to be corrupted by wrong measurements or outliers. We compare classical methods based on re-weighting of residuals to our setting, in which the wavelet representation of the data computed by our algorithm is used to locate the outliers. As a final application that couples two of the main applications of wavelets (data analysis and operator equations), we propose the use of this least squares data fitting method to evaluate the non-linear term in the wavelet-Galerkin formulation of non-linear PDE problems. At the end of this thesis we discuss efficient implementation issues, with a special interest in the interplay between solution methods and data structures.
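The classical re-weighting baseline mentioned for robust fitting can be sketched as iteratively re-weighted least squares with Huber weights; the design matrix B stands in for the thesis's tensor-product spline-wavelet basis evaluated at the scattered sites (the function name and the Huber choice are mine):

```python
import numpy as np

def irls_fit(B, y, delta=1.0, iters=20):
    """Robust least squares fit by iterative re-weighting (Huber weights).
    B is the design matrix -- any basis evaluated at the scattered sites;
    the thesis would use tensor-product spline-wavelets here.  Points with
    large residuals are down-weighted instead of dominating the fit."""
    w = np.ones(len(y))
    c = np.zeros(B.shape[1])
    for _ in range(iters):
        BW = B * w[:, None]                        # rows scaled by weights
        c = np.linalg.solve(BW.T @ B, BW.T @ y)    # weighted normal equations
        r = np.abs(y - B @ c)                      # absolute residuals
        w = np.minimum(1.0, delta / np.maximum(r, 1e-12))  # Huber weights
    return c
```

A plain least squares fit is pulled far off by a handful of gross outliers, whereas the re-weighted fit drives their weights toward zero and stays close to the uncontaminated data; the thesis instead locates outliers through the wavelet representation itself.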