Sparse grid quadrature on products of spheres
We examine sparse grid quadrature on weighted tensor products (WTP) of
reproducing kernel Hilbert spaces on products of the unit sphere, in the case
of worst case quadrature error for rules with arbitrary quadrature weights. We
describe a dimension adaptive quadrature algorithm based on an algorithm of
Hegland (2003), and also formulate a version of Wasilkowski and Wozniakowski's
WTP algorithm (1999), here called the WW algorithm. We prove that the dimension
adaptive algorithm is optimal in the sense of Dantzig (1957) and therefore no
greater in cost than the WW algorithm. Both algorithms therefore have the
optimal asymptotic rate of convergence given by Theorem 3 of Wasilkowski and
Wozniakowski (1999). A numerical example shows that, even though the asymptotic
convergence rate is optimal, if the dimension weights decay slowly enough, and
the dimensionality of the problem is large enough, the initial convergence of
the dimension adaptive algorithm can be slow.
Comment: 34 pages, 6 figures. Accepted 7 January 2015 for publication in Numerical Algorithms. Revised at page proof stage to (1) update email address; (2) correct the accent on "Wozniakowski" on p. 7; (3) update reference 2; (4) correct references 3, 18 and 2.
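As background for the sparse grid quadrature discussed in this abstract, the following sketch implements the standard two-dimensional Smolyak combination technique with trapezoidal rules. This is a minimal generic illustration, not the WTP or dimension adaptive algorithm of the paper; the function names and the test integrand are illustrative choices.

```python
import numpy as np

def trap_1d(level):
    """Nodes and weights of the trapezoidal rule with 2**level intervals on [0, 1]."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / 2 ** level)
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def tensor_quad(f, l1, l2):
    """Full tensor product rule Q_{l1} x Q_{l2} on [0, 1]^2."""
    x1, w1 = trap_1d(l1)
    x2, w2 = trap_1d(l2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return float(np.einsum("i,j,ij->", w1, w2, f(X1, X2)))

def sparse_quad(f, n):
    """Smolyak sparse grid quadrature of level n via the 2D combination technique:
    sum the tensor rules with |l|_1 = n and subtract those with |l|_1 = n - 1."""
    total = 0.0
    for l1 in range(n + 1):
        total += tensor_quad(f, l1, n - l1)
    for l1 in range(n):
        total -= tensor_quad(f, l1, n - 1 - l1)
    return total
```

The combination technique reaches nearly the accuracy of the full tensor grid while evaluating far fewer points, which is the cost advantage the sparse grid literature exploits.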
Numerics and Fractals
Local iterated function systems are an important generalisation of the
standard (global) iterated function systems (IFSs). For a particular class of
mappings, their fixed points are the graphs of local fractal functions and
these functions themselves are known to be the fixed points of an associated
Read-Bajraktarević operator. This paper establishes existence and properties
of local fractal functions and discusses how they are computed. In particular,
it is shown that piecewise polynomials are a special case of local fractal
functions. Finally, we develop a method to compute the components of a local
IFS from data or (partial differential) equations.
Comment: version 2: minor updates and section 6.1 rewritten. arXiv admin note: substantial text overlap with arXiv:1309.0243.
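To illustrate the fixed-point characterization mentioned above, here is a minimal sketch of iterating a Read-Bajraktarević-type operator on a dyadic grid. The two affine maps L_i(x) = (x + i)/2, the functions q_i, and the scalings s_i are hypothetical choices, not taken from the paper; with |s_i| < 1 the operator is a sup-norm contraction, and with s_i = 0 the fixed point reduces to a piecewise polynomial, matching the special case noted in the abstract.

```python
import numpy as np

def rb_operator(f_vals, x, q, s):
    """One application of a Read-Bajraktarevic-type operator on grid samples:
    (T f)(x) = q_i(y) + s_i * f(y),  where y = 2x - i  for x in [i/2, (i+1)/2]."""
    g = np.empty_like(f_vals)
    n = len(x) - 1                      # dyadic grid: x_j = j / n
    for j, xj in enumerate(x):
        i = 0 if xj < 0.5 else 1        # active subinterval
        y = 2.0 * xj - i                # pull back through L_i^{-1}
        g[j] = q[i](y) + s[i] * f_vals[int(round(y * n))]  # y is again a grid point
    return g

def fractal_function(x, q, s, iters=60):
    """Approximate the fixed point (a local fractal function) by contraction iteration."""
    f = np.zeros_like(x)
    for _ in range(iters):
        f = rb_operator(f, x, q, s)
    return f
```

Because the dyadic grid is mapped into itself by the maps 2x - i, the iteration needs no interpolation and converges geometrically at rate max |s_i|.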
Convergence rates in ℓ¹-regularization when the basis is not smooth enough
Sparsity promoting regularization is an important technique for signal reconstruction and several other ill-posed problems. Theoretical investigations are typically based on the assumption that the unknown solution has a sparse representation with respect to a fixed basis. We drop this sparsity assumption and provide error estimates for nonsparse solutions. After discussing a result in this direction published earlier by one of the authors and co-authors, we prove a similar error estimate under weaker assumptions. Two examples illustrate that this set of weaker assumptions indeed covers additional situations which appear in applications.
J. Flemming was supported by the German Science Foundation (DFG) under grant FL 832/1-1. M.
Hegland was partially supported by the Technische Universität München Institute of Advanced Study,
funded by the German Excellence Initiative. Work on this article was partially conducted during a
stay of M. Hegland at TU Chemnitz, supported by the German Science Foundation (DFG) under grant
HO 1454/8-1
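The ℓ¹-regularized problems described above are commonly solved numerically by iterative soft-thresholding (ISTA). The following is a generic sketch of that standard method, not the error-estimate machinery of the paper; the step-size choice via the spectral norm is a common but illustrative default.

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Iterative soft-thresholding for  min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - (A.T @ (A @ x - y)) / L     # gradient step on the least-squares part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
    return x
```

The soft-thresholding step is what promotes sparsity: components of z smaller than lam/L in magnitude are set exactly to zero.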
Generalized Gearhart-Koshy acceleration is a Krylov space method of a new type
The Gearhart-Koshy acceleration for the Kaczmarz method for linear systems is
a line-search with the unusual property that it does not minimize the residual,
but the error. Recently, one of the authors generalized this acceleration
from a line-search to a search in affine subspaces.
In this paper, we demonstrate that the affine search is a Krylov space method
that is neither a CG-type nor a MINRES-type method, and we prove that it is
mathematically equivalent to a more canonical Gram-Schmidt-based method. We
also investigate what abstract property of the Kaczmarz method enables this
type of algorithm, and we conclude with a simple numerical example.
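For readers unfamiliar with the underlying iteration, here is a minimal sketch of the cyclic Kaczmarz sweep together with a Gearhart-Koshy-type line search for the homogeneous case (b = 0), where the error-minimizing parameter has the closed form t = ⟨x, x − Qx⟩ / ‖x − Qx‖². This is an illustrative reconstruction of the classical line search, not the affine-subspace generalization analyzed in the paper.

```python
import numpy as np

def kaczmarz_sweep(A, b, x):
    """One cyclic Kaczmarz sweep: orthogonally project x onto each
    hyperplane a_i . x = b_i in turn."""
    for a_i, b_i in zip(A, b):
        x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

def gk_step(x, Qx):
    """Gearhart-Koshy line search for b = 0: choose t so that x - t*(x - Qx)
    is closest to the solution subspace (it minimizes the error, not the residual)."""
    d = x - Qx
    denom = d @ d
    if denom == 0.0:
        return Qx                       # x is already a fixed point of the sweep
    t = (x @ d) / denom
    return x - t * d
```

A plain run simply repeats kaczmarz_sweep; the accelerated run applies gk_step to the result of each sweep.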
- …