Local interpolation schemes for landmark-based image registration: a comparison
In this paper we focus, from a mathematical point of view, on properties and
performances of some local interpolation schemes for landmark-based image
registration. Precisely, we consider modified Shepard's interpolants,
Wendland's functions, and Lobachevsky splines. They are quite unlike each
other, but all of them are compactly supported and enjoy interesting
theoretical and computational properties. In particular, we point out some
unusual forms of the considered functions. Finally, detailed numerical
comparisons are given, considering also Gaussians and thin plate splines, which
are globally supported but widely used in applications.
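As a concrete illustration of the compactly supported schemes compared above, the following sketch interpolates landmark displacements with Wendland's C^2 function. The helper names (`rbf_transform`, `wendland_c2`) and the support radius are hypothetical, not from the paper; this is a minimal sketch of the general landmark-based RBF registration setup, assuming distinct landmarks and a support large enough to keep the interpolation matrix nonsingular.

```python
import numpy as np

def wendland_c2(r, support=1.0):
    # Wendland's C^2 function phi(r) = (1 - r)_+^4 (4r + 1),
    # compactly supported on [0, support]
    r = r / support
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_transform(landmarks_src, landmarks_dst, points, support=2.0):
    """Map `points` with the RBF transform that sends each source landmark
    exactly onto its target landmark (hypothetical helper, not the paper's code)."""
    d = landmarks_dst - landmarks_src                   # displacements at landmarks
    r = np.linalg.norm(landmarks_src[:, None] - landmarks_src[None, :], axis=-1)
    A = wendland_c2(r, support)                         # (n, n) interpolation matrix
    coeffs = np.linalg.solve(A, d)                      # one coefficient column per coordinate
    r_eval = np.linalg.norm(points[:, None] - landmarks_src[None, :], axis=-1)
    return points + wendland_c2(r_eval, support) @ coeffs
```

By construction the transform interpolates: evaluating at the source landmarks returns the target landmarks. The compact support means that, for a small support radius, each landmark only influences registration locally.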
The Complexity of Optimizing over a Simplex, Hypercube or Sphere: A Short Survey
We consider the computational complexity of optimizing various classes of continuous functions over a simplex, hypercube or sphere. These relatively simple optimization problems have many applications. We review known approximation results as well as negative (inapproximability) results from the recent literature.
Keywords: computational complexity; global optimization; linear and semidefinite programming; approximation algorithms
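One tractable case in this problem class is minimizing a quadratic form over the unit sphere, which reduces to a smallest-eigenvalue computation via the Rayleigh quotient. A minimal sketch (the function name is an illustrative choice, not from the survey):

```python
import numpy as np

def min_quadratic_on_sphere(A):
    """Minimize x^T A x subject to ||x|| = 1.
    By the Rayleigh quotient, the minimum equals the smallest eigenvalue
    of the symmetric part of A, attained at the corresponding eigenvector."""
    S = 0.5 * (A + A.T)          # only the symmetric part matters for x^T A x
    w, V = np.linalg.eigh(S)     # eigenvalues in ascending order
    return w[0], V[:, 0]
```

This polynomial-time solvability contrasts with optimization over the simplex or hypercube, where even low-degree polynomial objectives can be hard to approximate.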
Spectral tensor-train decomposition
The accurate approximation of high-dimensional functions is an essential task
in uncertainty quantification and many other fields. We propose a new function
approximation scheme based on a spectral extension of the tensor-train (TT)
decomposition. We first define a functional version of the TT decomposition and
analyze its properties. We obtain results on the convergence of the
decomposition, revealing links between the regularity of the function, the
dimension of the input space, and the TT ranks. We also show that the
regularity of the target function is preserved by the univariate functions
(i.e., the "cores") comprising the functional TT decomposition. This result
motivates an approximation scheme employing polynomial approximations of the
cores. For functions with appropriate regularity, the resulting
\textit{spectral tensor-train decomposition} combines the favorable
dimension-scaling of the TT decomposition with the spectral convergence rate of
polynomial approximations, yielding efficient and accurate surrogates for
high-dimensional functions. To construct these decompositions, we use the
sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of
tensors resulting from suitable discretizations of the target function. We
assess the performance of the method on a range of numerical examples: a
modified set of Genz functions with dimension up to , and functions with
mixed Fourier modes or with local features. We observe significant improvements
in performance over an anisotropic adaptive Smolyak approach. The method is
also used to approximate the solution of an elliptic PDE with random input
data. The open source software and examples presented in this work are
available online.
Comment: 33 pages, 19 figures
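For intuition about the discrete TT decomposition underlying the scheme above, the following sketch implements the basic TT-SVD, which factors a full tensor into a chain of three-way cores via sequential truncated SVDs. Note this is not the paper's `TT-DMRG-cross` algorithm, which builds the cores from sampled entries without ever forming the full tensor; the function names here are illustrative.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose a full tensor into tensor-train cores by sequential
    truncated SVDs (basic TT-SVD, for illustration only)."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))          # truncation rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # carry the remainder to the next unfolding
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the TT cores back into a full tensor
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

The TT ranks chosen at each truncation control the trade-off between storage and accuracy; the spectral extension in the paper replaces the discrete index in each core with a polynomial expansion in the corresponding input variable.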