Hyperbolic Cross Approximation
Hyperbolic cross approximation is a special type of multivariate
approximation. Recently, driven by applications in engineering, biology,
medicine, and other areas of science, new and challenging problems have appeared. The
common feature of these problems is high dimensionality. We present here a survey
of classical methods developed in multivariate approximation theory, which are
known to work very well for moderate dimensions and which have potential for
applications in really high dimensions. The theory of hyperbolic cross
approximation and the related theory of functions with mixed smoothness have been under
detailed study for more than 50 years. It is now well understood that this
theory is important both for theoretical study and for practical applications.
It is also understood that both the theoretical analysis and the construction of
practical algorithms are very difficult problems. This explains why many
fundamental problems in this area are still unsolved. Only a few survey papers
and monographs on the topic have been published. This, together with recently discovered deep
connections between hyperbolic cross approximation (and the related sparse
grids) and other areas of mathematics such as probability, discrepancy, and
numerical integration, motivated us to write this survey. We try to put the emphasis
on the development of ideas and methods rather than list all the known results
in the area. We formulate many problems which, to our knowledge, are open.
We also include some very recent results on the topic, which
sometimes point to new and interesting directions of research. We hope that this
survey will stimulate further active research in this fascinating and
challenging area of approximation theory and numerical analysis.
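For orientation, recall the standard definition (stated here for convenience, not quoted from the survey): hyperbolic cross approximation means approximation by trigonometric polynomials with frequencies in the hyperbolic cross
\begin{equation} \nonumber
\Gamma(N) := \Big\{ k \in \mathbb{Z}^d : \prod_{j=1}^{d} \max(|k_j|, 1) \le N \Big\},
\end{equation}
a set of cardinality of order $N (\log N)^{d-1}$, dramatically smaller than the roughly $N^d$ frequencies of a full grid of the same resolution; this is the source of the method's appeal for high-dimensional problems.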
Optimal sampling recovery of mixed order Sobolev embeddings via discrete Littlewood-Paley type characterizations
In this paper we consider the $L_q$-approximation of multivariate periodic
functions with $L_p$-bounded mixed derivative (difference). The (possibly
non-linear) reconstruction algorithm is supposed to recover the function from
function values sampled on a discrete set of $n$ sampling nodes. The general
performance is measured in terms of (non-)linear sampling widths.
We conduct a systematic analysis of Smolyak type interpolation algorithms in
the framework of Besov-Lizorkin-Triebel spaces of dominating mixed smoothness,
based on specifically tailored discrete Littlewood-Paley type
characterizations. As a consequence, we provide sharp upper bounds for the
asymptotic order of the (non-)linear sampling widths in various situations and
close some gaps in the existing literature. For example, in one range of the
integrability parameters the linear sampling widths show the asymptotic behavior
of the corresponding Gelfand $n$-widths, whereas in another range they match the
corresponding linear widths. In these cases, linear Smolyak
interpolation based on classical univariate trigonometric interpolation turns
out to be optimal.
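A schematic form of the Smolyak construction analyzed above, in standard notation (the concrete univariate operators in the paper are trigonometric interpolation operators): given univariate interpolation operators $I_m$, $m \in \mathbb{N}_0$, with $I_{-1} := 0$, the level-$n$ Smolyak operator is
\begin{equation} \nonumber
A_n := \sum_{\substack{j \in \mathbb{N}_0^d \\ j_1 + \cdots + j_d \le n}} \; \bigotimes_{i=1}^{d} \big( I_{j_i} - I_{j_i - 1} \big),
\end{equation}
so that only tensor products with small total level contribute, and the function is sampled on a sparse grid rather than a full tensor grid.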
Kolmogorov n-Widths of Function Classes Induced by a Non-Degenerate Differential Operator: A Convex Duality Approach
Let $D$ be the differential operator induced by a polynomial $P$, and let
$W$ be the class of multivariate periodic functions $f$ normalized by the condition
$\|Df\| \le 1$. The problem of computing the asymptotic order of the
Kolmogorov $n$-width of $W$ in the general case when $W$
is compactly embedded into the underlying Lebesgue space has been open for a long time.
In the present paper, we use convex analytical tools to solve it in the case
when the operator is non-degenerate.
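For reference, the Kolmogorov $n$-width of a class $W$ in a normed space $X$ is defined in the standard way as
\begin{equation} \nonumber
d_n(W, X) := \inf_{\substack{V \subset X \\ \dim V \le n}} \; \sup_{f \in W} \; \inf_{g \in V} \| f - g \|_X,
\end{equation}
the best worst-case error achievable by approximation from an optimally chosen linear subspace of dimension at most $n$.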
Sampling and cubature on sparse grids based on a B-spline quasi-interpolation
Let $X_n = \{x^j\}_{j=1}^n$ be a set of $n$ points in the $d$-cube $[0,1]^d$,
and $\Phi_n = \{\varphi_j\}_{j=1}^n$ a family of functions on $[0,1]^d$.
We consider the approximate recovery of functions $f$ on $[0,1]^d$ from the
sampled values $f(x^1), \ldots, f(x^n)$, by the linear sampling algorithm
\begin{equation} \nonumber L_n(X_n,\Phi_n,f) \ := \ \sum_{j=1}^n
f(x^j)\varphi_j. \end{equation} The error of sampling recovery is measured in
the $L_q$-norm or in the energy norm of an isotropic Sobolev space.
Functions to be recovered are taken from the unit ball of Besov type spaces of
anisotropic smoothness, in particular spaces of a nonuniform
mixed smoothness and spaces of a "hybrid" of mixed smoothness
and isotropic smoothness. We construct asymptotically optimal linear
sampling algorithms on special sparse grids,
using a family of linear combinations of integer or half-integer
translated dilations of tensor products of B-splines, and we compute the
asymptotic order of the error of the optimal recovery. The construction is based on
B-spline quasi-interpolation representations of functions in these Besov type
spaces. As a consequence, we obtain the asymptotic order
of optimal cubature formulas for numerical integration of functions from the
unit ball of these spaces.
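As a toy illustration of the quasi-interpolation idea, here is a minimal one-dimensional sketch in Python; this is not the paper's multivariate construction, and the mesh size and test function are arbitrary illustrative choices.

import numpy as np

def hat(t):
    # centered linear B-spline (order 2), supported on [-1, 1]
    return np.maximum(0.0, 1.0 - np.abs(t))

def quasi_interpolant(f, h, x):
    # Q_h f(x) = sum_k f(k*h) * hat(x/h - k); reproduces linear functions exactly
    ks = np.arange(int(np.floor(x.min() / h)) - 1, int(np.ceil(x.max() / h)) + 2)
    out = np.zeros_like(x)
    for k in ks:
        out += f(k * h) * hat(x / h - k)
    return out

x = np.linspace(0.0, 1.0, 1001)
f = lambda t: np.sin(2 * np.pi * t)
err = np.max(np.abs(f(x) - quasi_interpolant(f, 1.0 / 64, x)))
print(err)  # decreases like h^2 as the mesh h is refined

Replacing the hat function by higher-order B-splines and tensorizing over sparse grid levels is the general direction of the constructions studied in the paper.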
Mixed Moduli of Smoothness in $L_p$, $1 < p < \infty$
In this paper we survey recent developments over the last 25 years on the
mixed fractional moduli of smoothness of periodic functions from $L_p$,
$1 < p < \infty$. In particular, the paper includes monotonicity properties,
equivalence and realization results, sharp Jackson, Marchaud, and Ul'yanov
inequalities, and interrelations between the moduli of smoothness, the Fourier
coefficients, and "angular" approximation. The sharpness of the presented
results is discussed.
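For orientation, in the bivariate case the mixed modulus of smoothness of integer orders $k_1, k_2$ is usually defined by
\begin{equation} \nonumber
\omega_{k_1,k_2}(f, \delta_1, \delta_2)_p := \sup_{|h_1| \le \delta_1, \; |h_2| \le \delta_2} \big\| \Delta_{h_1,1}^{k_1} \Delta_{h_2,2}^{k_2} f \big\|_p,
\end{equation}
where $\Delta_{h,i}^{k}$ denotes the $k$-th order difference with step $h$ taken in the $i$-th variable; the fractional version surveyed in the paper replaces the integer-order differences by fractional ones.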
Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality
Deep learning has shown high performance in various types of tasks, from
visual recognition to natural language processing, which indicates the superior
flexibility and adaptivity of deep learning. To understand this phenomenon
theoretically, we develop a new approximation and estimation error analysis of
deep learning with the ReLU activation for functions in a Besov space and its
variant with mixed smoothness. The Besov space is a considerably general
function space that includes the Hölder and Sobolev spaces and, in particular, can
capture spatial inhomogeneity of smoothness. Through the analysis in the Besov
space, it is shown that deep learning can achieve the minimax optimal rate and
outperform any non-adaptive (linear) estimator such as kernel ridge regression,
which shows that deep learning has higher adaptivity to the spatial
inhomogeneity of the target function than other estimators such as linear ones.
In addition, it is shown that deep learning can avoid the curse of
dimensionality if the target function is in a mixed smooth Besov space. We also
show that the dependency of the convergence rate on the dimensionality is tight
due to its minimax optimality. These results support the high adaptivity of deep
learning and its superior ability as a feature extractor.
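For concreteness, the function class realized by the estimators in such analyses is that of deep fully connected ReLU networks; the following minimal Python/NumPy sketch shows the object being analyzed (the width, depth, and random weights are illustrative only, not the paper's choices).

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net(x, weights, biases):
    # f(x) = W_L * relu(... relu(W_1 x + b_1) ...) + b_L
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

rng = np.random.default_rng(0)
dims = [4, 16, 16, 1]  # input dim 4, two hidden layers of width 16, scalar output
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]
y = relu_net(rng.standard_normal(4), weights, biases)
print(y)

The approximation-theoretic results bound how well functions from a (mixed smooth) Besov ball can be represented by such networks as the depth, width, and sparsity grow.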
Sampling on energy-norm based sparse grids for the optimal recovery of Sobolev type functions in $H^\gamma$
We investigate the rate of convergence of linear sampling numbers of the
embedding of a space $H^{\alpha,\beta}$ of hybrid smoothness into the isotropic
Sobolev space $H^\gamma$. Here $\alpha$ governs the mixed smoothness and $\beta$ the
isotropic smoothness of $H^{\alpha,\beta}$. In one range of the smoothness
parameters we obtain sharp polynomial decay rates for this
embedding, realized by sampling operators based on "energy-norm based sparse
grids" and classical trigonometric interpolation. This complements earlier
work by Griebel and Knapek and by Dũng and Ullrich, where general linear
approximations were considered. In addition, we study the embedding in a
complementary parameter range and achieve optimality for Smolyak's algorithm applied to classical
trigonometric interpolation. This can in turn be applied to investigate the sampling
numbers of further embeddings for which Smolyak's algorithm again yields
the optimal order. The precise decay rates for the sampling numbers in the
mentioned situations always coincide with those for the approximation numbers,
except possibly in a limiting situation (including the
embedding into $L_2$). The best we could prove there is a
(probably) non-sharp result with a logarithmic gap between the lower and upper
bounds.
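The index sets behind such grids can be sketched as follows (following the generalized sparse grid framework of Griebel and Knapek; the precise parameter choice in the paper may differ). The retained levels are
\begin{equation} \nonumber
I_n^T := \big\{ j \in \mathbb{N}_0^d : |j|_1 - T\,|j|_\infty \le (1 - T)\, n \big\}, \qquad 0 \le T < 1,
\end{equation}
where $T = 0$ recovers the classical sparse grid and $T > 0$ thins the grid out further, which is what makes the number of sampling nodes scale favorably when the error is measured in a Sobolev (energy-type) norm.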
Volumes of unit balls of mixed sequence spaces
The volume of the unit ball of the Lebesgue sequence space $\ell_p^n$ has been very
well known since the times of Dirichlet. We calculate the volume of the unit
ball in the mixed norm $\ell_q(\ell_p)$, whose special cases are nowadays
popular in machine learning under the name of group lasso. We consider the real
as well as the complex case. The result is given by a closed formula involving
the gamma function, only slightly more complicated than Dirichlet's.
We close with an overview of open problems.
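The classical result alluded to above is Dirichlet's formula for the real unit ball of $\ell_p^n$ (standard, stated here for reference):
\begin{equation} \nonumber
\operatorname{vol}\big( \{ x \in \mathbb{R}^n : \|x\|_p \le 1 \} \big) = \frac{\big( 2\,\Gamma(1 + 1/p) \big)^n}{\Gamma(1 + n/p)},
\end{equation}
which reduces to the familiar values $2^n/n!$ for $p = 1$ and $\pi^{n/2}/\Gamma(1 + n/2)$ for $p = 2$.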
Tight error bounds for rank-1 lattice sampling in spaces of hybrid mixed smoothness
We consider the approximate recovery of multivariate periodic functions from
a discrete set of function values taken on a rank-1 integration lattice. The
main result is the fact that any (non-)linear reconstruction algorithm taking
function values on a rank-1 lattice has a dimension-independent
lower bound on the optimal
worst-case error with respect to function spaces of (hybrid) mixed smoothness
on the $d$-torus. We complement this lower bound with upper bounds
that coincide up to logarithmic terms. These upper bounds are obtained by a
detailed analysis of a rank-1 lattice sampling strategy in which the rank-1
lattices are constructed by a component-by-component (CBC) method. This
improves on earlier results obtained in [25] and [27]. The lattice (group)
structure allows for an efficient approximation of the underlying function from
its sampled values using a single one-dimensional fast Fourier transform. This
is one reason why these algorithms keep attracting significant interest. We
compare our results to recent (almost) optimal methods based upon samples on
sparse grids.
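The FFT-based reconstruction mentioned above can be sketched in a few lines of Python; this is a small illustration, not the paper's CBC-optimized construction, and the generating vector z below is an arbitrary choice.

import numpy as np

d, M = 2, 257                     # dimension and lattice size (M prime)
z = np.array([1, 33])             # illustrative generating vector (not CBC-constructed)

i = np.arange(M)
nodes = (np.outer(i, z) % M) / M  # lattice nodes x_i = (i * z mod M) / M in [0,1)^d

def f(x):
    # simple periodic test function on the 2-torus
    return np.cos(2 * np.pi * x[:, 0]) * np.sin(4 * np.pi * x[:, 1])

samples = f(nodes)
fhat = np.fft.fft(samples) / M    # one length-M FFT of the sampled values

# On a rank-1 lattice, the exponential with frequency k restricts to a 1D
# exponential with frequency (k . z) mod M, so the approximate Fourier
# coefficient of k is read off at that residue; it is exact unless another
# active frequency aliases to the same residue.
k = np.array([1, 2])
print(fhat[int(np.dot(k, z)) % M])   # close to the true coefficient -0.25i

The group structure of the lattice is exactly what collapses the $d$-dimensional transform into this single one-dimensional FFT.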
The role of Frolov's cubature formula for functions with bounded mixed derivative
We prove upper bounds on the order of convergence of Frolov's cubature
formula for numerical integration in function spaces of dominating mixed
smoothness on the unit cube with homogeneous boundary condition. More
precisely, we study worst-case integration errors for Besov
and Triebel-Lizorkin spaces of dominating mixed smoothness,
and our results treat the whole range of admissible parameters.
In particular, we obtain upper bounds for the difficult case of small
smoothness, which for Triebel-Lizorkin spaces occurs in a certain range
of the integrability and smoothness parameters. The presented upper
bounds on the worst-case error show a completely different behavior compared to
the case of "large" smoothness. In the latter case the presented upper bounds
are optimal, i.e., they cannot be improved by any other cubature formula. The
optimality for "small" smoothness remains open.
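Schematically, Frolov's rule has the following standard form (the textbook construction, not quoted from the paper): for an invertible matrix $B$ whose lattice $B\mathbb{Z}^d$ is admissible, i.e. $\prod_{j=1}^{d} |y_j|$ is bounded away from zero over all nonzero lattice points $y$, and a dilation parameter $a > 1$,
\begin{equation} \nonumber
Q_a(f) := \frac{|\det B|}{a^d} \sum_{m \in \mathbb{Z}^d} f\!\left( \frac{B m}{a} \right),
\end{equation}
where the sum is finite because $f$ vanishes outside the unit cube; the admissibility of the lattice is what drives the convergence rate for functions with bounded mixed derivative.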