Hölder Regularity of Geometric Subdivision Schemes
We present a framework for analyzing non-linear vector-valued subdivision schemes which are geometric in the sense that they commute with similarities of the ambient space. It allows us to establish first-order Hölder regularity for arbitrary schemes of this type, and second-order Hölder regularity for an important subset thereof, which includes all real-valued schemes. Our results are constructive in the sense that they can be verified explicitly for any scheme and any given set of initial data by a universal procedure. This procedure can be executed automatically and rigorously by a computer using interval arithmetic.
Comment: 31 pages, 1 figure
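The defining property above, commuting with similarities, can be checked numerically for a classical linear scheme. The sketch below is illustrative only and not from the paper; it applies the linear four-point scheme (whose mask sums to 1) to a closed 2D polygon and verifies that refining and then applying a similarity gives the same result as applying the similarity first:

```python
import numpy as np

def four_point_refine(pts):
    """One round of the linear four-point scheme applied to a closed
    polygon of 2D points. Each old vertex is kept and a new vertex is
    inserted between neighbours with mask (-1, 9, 9, -1)/16."""
    n = len(pts)
    new = []
    for i in range(n):
        p0, p1, p2, p3 = pts[(i - 1) % n], pts[i], pts[(i + 1) % n], pts[(i + 2) % n]
        new.append(p1)                                   # even point: kept
        new.append((-p0 + 9 * p1 + 9 * p2 - p3) / 16.0)  # odd point: inserted
    return np.array(new)

# A similarity of the plane: rotation by 30 degrees composed with scaling by 2.
theta = np.pi / 6
S = 2.0 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# "Geometric" in the sense of the paper: refine-then-map equals map-then-refine.
lhs = four_point_refine(pts) @ S.T
rhs = four_point_refine(pts @ S.T)
assert np.allclose(lhs, rhs)
```

For linear schemes this commutation is automatic; the point of the paper is to analyze non-linear schemes that share only this geometric property.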
Quasi-interpolation in Riemannian manifolds
We consider quasi-interpolation operators for functions assuming their values in a Riemannian manifold. We construct such operators from corresponding linear quasi-interpolation operators by replacing affine averages with the Riemannian centre of mass. As a main result, we show that the approximation rate of such a nonlinear operator is the same as for the linear operator it has been derived from. In order to formulate this result in an intrinsic way, we use the Sasaki metric to compare the derivatives of the function to be approximated with the derivatives of the nonlinear approximant. Numerical experiments confirm our theoretical findings.
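The core construction above, replacing a weighted affine average with the Riemannian centre of mass, can be sketched for the unit sphere. This is a minimal illustration under my own naming (`karcher_mean`, fixed-point iteration); the paper's operators and convergence analysis are more general:

```python
import numpy as np

def sphere_log(p, q):
    """Log map of the unit sphere: tangent vector at p pointing to q,
    with length equal to the geodesic distance."""
    v = q - np.dot(p, q) * p
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * v / nv

def sphere_exp(p, v):
    """Exp map of the unit sphere at p in tangent direction v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def karcher_mean(points, weights, iters=50):
    """Riemannian centre of mass: the weighted affine average is replaced
    by the fixed point of x -> exp_x( sum_i w_i log_x(p_i) )."""
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = sum(w * sphere_log(x, p) for w, p in zip(weights, points))
        x = sphere_exp(x, v)
    return x

# Equal-weight mean of two points a quarter-circle apart: the geodesic midpoint.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
m = karcher_mean([p, q], [0.5, 0.5])
```

Feeding the quasi-interpolation weights of a linear operator into such a mean yields the nonlinear, manifold-valued operator the abstract refers to.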
Subdivision schemes with general dilation in the geometric and nonlinear setting
We establish results on convergence and smoothness of subdivision rules operating on manifold-valued data which are based on a general dilation matrix. In particular, we cover irregular combinatorics. For the regular grid case, results are not restricted to isotropic dilation matrices. The nature of the results is that intrinsic subdivision rules which operate on geometric data inherit smoothness properties of their linear counterparts.
Point-Normal Subdivision Curves and Surfaces
This paper proposes to generalize linear subdivision schemes to nonlinear subdivision schemes for curve and surface modeling by refining vertex positions together with unit control normals at the vertices. For each round of subdivision, new control normals are obtained by projecting the linearly subdivided normals onto the unit circle or sphere, while new vertex positions are obtained by updating the linearly subdivided vertices along the directions of the newly subdivided normals. In particular, the new position of each linearly subdivided vertex is computed as a weighted average of end points of circular or helical arcs that interpolate the positions and normals at the old vertices at one end and the newly subdivided normal at the other end.
The main features of the proposed subdivision schemes are threefold:
(1) the point-normal (PN) subdivision schemes can reproduce circles, circular cylinders and spheres using control points and control normals;
(2) PN subdivision schemes generalized from convergent linear subdivision schemes converge and can have the same smoothness orders as the linear schemes;
(3) PN subdivision schemes generalizing linear subdivision schemes that generate subdivision surfaces with flat extraordinary points can generate visually smooth subdivision surfaces with non-flat extraordinary points.
Experimental examples are given to show the effectiveness of the proposed techniques for curve and surface modeling.
Comment: 30 pages, 17 figures, 22.5M
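The point-and-normal refinement loop can be sketched for a closed 2D curve. This is a deliberately simplified sketch of the structure only: it refines positions and normals by the linear midpoint rule and projects the new normals onto the unit circle, but omits the paper's circular-arc correction that moves each new vertex along its new normal. Names like `pn_refine` are my own:

```python
import numpy as np

def pn_refine(points, normals):
    """One simplified PN-style refinement round for a closed 2D curve:
    vertices and unit normals are refined by the linear midpoint rule,
    then each new normal is projected back onto the unit circle.
    (The full scheme additionally updates the new vertices along these
    normals using weighted averages of circular-arc end points.)"""
    n = len(points)
    new_pts, new_nrm = [], []
    for i in range(n):
        j = (i + 1) % n
        new_pts.append(points[i])                      # old vertex kept
        new_nrm.append(normals[i])                     # old normal kept
        new_pts.append(0.5 * (points[i] + points[j]))  # linear midpoint
        m = 0.5 * (normals[i] + normals[j])
        new_nrm.append(m / np.linalg.norm(m))          # project onto unit circle
    return np.array(new_pts), np.array(new_nrm)

# A square with outward unit normals at its vertices.
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
nrm = pts.copy()
P, N = pn_refine(pts, nrm)
```

The projection step is what makes the scheme nonlinear: averaged unit normals leave the unit circle, and normalizing them is exactly the projection the abstract describes.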
Approximating the Derivative of Manifold-valued Functions
We consider the approximation of manifold-valued functions by embedding the manifold into a higher-dimensional space, applying a vector-valued approximation operator, and projecting the resulting vector back to the manifold. It is well known that the approximation error for manifold-valued functions is close to the approximation error for vector-valued functions. This is no longer true if we consider the derivatives of such functions. In our paper we give pre-asymptotic error bounds for the approximation of the derivative of manifold-valued functions. In particular, we provide explicit constants that depend on the reach of the embedded manifold.
Comment: 25 pages, 5 figures
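The embed-approximate-project pipeline can be sketched for the simplest manifold, the unit circle, where the closest-point projection is just normalization. This is an illustrative setup of my own (names `project_approx`, piecewise-linear interpolation as the vector-valued operator), not the paper's construction:

```python
import numpy as np

def closest_point_circle(x):
    """Closest-point projection onto the unit circle, well defined
    whenever x lies within the reach of the manifold (here: x != 0)."""
    return x / np.linalg.norm(x)

def project_approx(samples, t_samples, t):
    """Embed-approximate-project: piecewise-linear interpolation of the
    circle-valued samples in R^2, followed by projection back onto the circle."""
    x = np.array([np.interp(t, t_samples, samples[:, k]) for k in range(2)])
    return closest_point_circle(x)

# A manifold-valued function: a curve on the unit circle.
f = lambda t: np.array([np.cos(t), np.sin(t)])
ts = np.linspace(0.0, 1.0, 11)
samples = np.array([f(t) for t in ts])

# The value error behaves like the linear (vector-valued) error ...
err = max(np.linalg.norm(project_approx(samples, ts, t) - f(t))
          for t in np.linspace(0.0, 1.0, 101))
assert err < 1e-2
```

The point of the paper is that for the *derivative* of such an approximant this parallel with the vector-valued case breaks down, and the resulting bounds involve the reach of the embedded manifold.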
Single basepoint subdivision schemes for manifold-valued data: Time-symmetry without space-symmetry.
This paper establishes smoothness results for a class of nonlinear subdivision schemes, known as the single basepoint manifold-valued subdivision schemes, which show up in the construction of wavelet-like transforms for manifold-valued data. This class includes the (single basepoint) Log-Exp subdivision scheme as a special case. In these schemes, the exponential map is replaced by a so-called retraction map f from the tangent bundle of a manifold to the manifold. It is known that any choice of retraction map yields a C^2 scheme, provided the underlying linear scheme is C^2 (this is called "C^2 equivalence"). But when the underlying linear scheme is C^3, Navayazdani and Yu have shown that to guarantee C^3 equivalence, a certain tensor P_f associated to f must vanish. They also show that P_f vanishes when the underlying manifold is a symmetric space and f is the exponential map. In the present paper, a geometric interpretation of the tensor P_f is given. Associated to the retraction map f is a torsion-free affine connection, which in turn defines an exponential map. The condition P_f = 0 is shown to be equivalent to the condition that f agrees with the exponential map of the connection up to third order. In particular, when f is the exponential map of a connection, one recovers the original connection and P_f vanishes. It then follows that the condition P_f = 0 is satisfied by a wider class of manifolds than was previously known. Under the additional assumption that the subdivision rule satisfies a time-symmetry, it is shown that the vanishing of P_f also guarantees C^4 equivalence. Finally, the analysis in the paper strongly indicates that vanishing curvature of the connection associated to f is a necessary condition for C^k equivalence for k ≥ 5.
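A single-basepoint rule can be sketched on the unit sphere, with the sphere's exponential map playing the role of the retraction f. The sketch below (my own naming, `log_exp_refine`) runs one round of Log-Exp subdivision based on the linear cubic B-spline rule, computing each new point in the tangent space at a nearby old point:

```python
import numpy as np

def sphere_log(p, q):
    """Inverse retraction f^{-1}: log map of the unit sphere at p."""
    v = q - np.dot(p, q) * p
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * v / nv

def sphere_exp(p, v):
    """Retraction f: exp map of the unit sphere at p."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def log_exp_refine(pts):
    """One round of single-basepoint Log-Exp subdivision of a closed
    spherical polygon, based on the linear cubic B-spline rule: pull the
    stencil into the tangent space at a single basepoint, apply the
    linear mask there, and map back with the retraction."""
    n = len(pts)
    out = []
    for i in range(n):
        b = pts[i]  # single basepoint for both new points
        # even rule, mask (1, 6, 1)/8
        v = (sphere_log(b, pts[(i - 1) % n]) + 6 * sphere_log(b, pts[i])
             + sphere_log(b, pts[(i + 1) % n])) / 8.0
        out.append(sphere_exp(b, v))
        # odd rule, mask (1, 1)/2
        w = (sphere_log(b, pts[i]) + sphere_log(b, pts[(i + 1) % n])) / 2.0
        out.append(sphere_exp(b, w))
    return out

# Refine a square inscribed in the equator; all new points stay on the sphere.
square = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
          np.array([-1.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])]
refined = log_exp_refine(square)
```

Replacing `sphere_exp`/`sphere_log` by any other retraction and its inverse gives the general single-basepoint scheme; the tensor P_f measures how far that retraction deviates, at third order, from the exponential map of its associated connection.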
Learning Theory and Approximation
Learning theory studies data structures from samples and aims at understanding the unknown function relations behind them. This leads to interesting theoretical problems which can often be attacked with methods from Approximation Theory. This workshop, the second of this type at the MFO, concentrated on the following recent topics: learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis and algorithmic aspects, including kernel-based methods for regression and classification; and applications of multiscale aspects and of refinement algorithms to learning.