
    Hyperbolic Cross Approximation

    Hyperbolic cross approximation is a special type of multivariate approximation. Recently, driven by applications in engineering, biology, medicine and other areas of science, new challenging problems have appeared. The common feature of these problems is high dimensionality. We present here a survey on classical methods developed in multivariate approximation theory, which are known to work very well for moderate dimensions and which have potential for applications in really high dimensions. The theory of hyperbolic cross approximation and the related theory of functions with mixed smoothness have been under detailed study for more than 50 years. It is now well understood that this theory is important both for theoretical study and for practical applications. It is also understood that both theoretical analysis and the construction of practical algorithms are very difficult problems. This explains why many fundamental problems in this area are still unsolved. Only a few survey papers and monographs on the topic have been published. This, together with recently discovered deep connections between hyperbolic cross approximation (and related sparse grids) and other areas of mathematics such as probability, discrepancy, and numerical integration, motivated us to write this survey. We try to put the emphasis on the development of ideas and methods rather than list all the known results in the area. We formulate many problems which, to our knowledge, are open. We also include some very recent results on the topic, which sometimes highlight new interesting directions of research. We hope that this survey will stimulate further active research in this fascinating and challenging area of approximation theory and numerical analysis. Comment: 185 pages, 24 figures.
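
    For orientation, the frequency set underlying hyperbolic cross approximation is commonly taken to be $\Gamma(N) = \{k \in \mathbb{Z}^d : \prod_{j=1}^d \max(1,|k_j|) \le N\}$, and a hyperbolic cross approximant is a trigonometric polynomial with frequencies restricted to $\Gamma(N)$. The following minimal Python sketch (illustrative only, not taken from the survey) enumerates this set by brute force and compares its size to that of the full cube of frequencies:

        # Minimal sketch: enumerate the hyperbolic cross index set
        # Gamma(N) = { k in Z^d : prod_j max(1, |k_j|) <= N }.
        from itertools import product

        def hyperbolic_cross(N, d):
            """All integer frequency vectors k in Z^d with prod_j max(1, |k_j|) <= N."""
            indices = []
            for k in product(range(-N, N + 1), repeat=d):
                weight = 1
                for kj in k:
                    weight *= max(1, abs(kj))
                if weight <= N:
                    indices.append(k)
            return indices

        # In d = 2 the cross contains on the order of N log N frequencies,
        # compared to (2N + 1)^2 for the full cube.
        print(len(hyperbolic_cross(16, 2)), (2 * 16 + 1) ** 2)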

    Optimal sampling recovery of mixed order Sobolev embeddings via discrete Littlewood-Paley type characterizations

    In this paper we consider the $L_q$-approximation of multivariate periodic functions $f$ with $L_p$-bounded mixed derivative (difference). The (possibly non-linear) reconstruction algorithm is supposed to recover the function from function values sampled on a discrete set of $n$ sampling nodes. The general performance is measured in terms of (non-)linear sampling widths $\varrho_n$. We conduct a systematic analysis of Smolyak type interpolation algorithms in the framework of Besov-Lizorkin-Triebel spaces of dominating mixed smoothness based on specifically tailored discrete Littlewood-Paley type characterizations. As a consequence, we provide sharp upper bounds for the asymptotic order of the (non-)linear sampling widths in various situations and close some gaps in the existing literature. For example, in case $2\leq p<q$ and $r>1/p$ the linear sampling widths $\varrho_n^{\text{lin}}(S^r_pW(\mathbb{T}^d),L_q(\mathbb{T}^d))$ and $\varrho^{\text{lin}}_n(S^r_{p,\infty}B(\mathbb{T}^d),L_q(\mathbb{T}^d))$ show the asymptotic behavior of the corresponding Gelfand $n$-widths, whereas in a second parameter range (again with $r>1/p$) the linear sampling widths match the corresponding linear widths. In the mentioned cases linear Smolyak interpolation based on univariate classical trigonometric interpolation turns out to be optimal.
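
    For reference, the Smolyak type interpolation algorithms analyzed here are built from a family of univariate interpolation operators $I_m$ (here trigonometric interpolation at dyadic levels) via the standard sparse tensor product construction; writing $\Delta_m = I_m - I_{m-1}$ with $I_{-1} := 0$, the level-$n$ Smolyak operator in its usual form (a standard formula, not quoted from the paper) reads
        $$ A_n f \;=\; \sum_{\substack{j \in \mathbb{N}_0^d \\ j_1 + \dots + j_d \le n}} \big(\Delta_{j_1} \otimes \cdots \otimes \Delta_{j_d}\big) f , $$
    so that only function values on a sparse grid of sampling nodes are used.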

    Kolmogorov n-Widths of Function Classes Induced by a Non-Degenerate Differential Operator: A Convex Duality Approach

    Let $P(D)$ be the differential operator induced by a polynomial $P$, and let $U^{[P]}_2$ be the class of multivariate periodic functions $f$ such that $\|P(D)(f)\|_2\leq 1$. The problem of computing the asymptotic order of the Kolmogorov $n$-width $d_n(U^{[P]}_2,L_2)$ in the general case when $U^{[P]}_2$ is compactly embedded into $L_2$ has been open for a long time. In the present paper, we use convex analytical tools to solve it in the case when $P(D)$ is non-degenerate.
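
    For reference, the Kolmogorov $n$-width measures the best possible approximation of a class by $n$-dimensional linear subspaces; in the standard notation (not specific to this paper),
        $$ d_n(F, X) \;=\; \inf_{\substack{L_n \subset X \\ \dim L_n \le n}} \; \sup_{f \in F} \; \inf_{g \in L_n} \|f - g\|_X , $$
    so that computing its asymptotic order for $F = U^{[P]}_2$ and $X = L_2$ quantifies how fast the class induced by $P(D)$ can be approximated by optimally chosen linear subspaces.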

    Sampling and cubature on sparse grids based on a B-spline quasi-interpolation

    Let $X_n = \{x^j\}_{j=1}^n$ be a set of $n$ points in the $d$-cube $[0,1]^d$, and $\Phi_n = \{\varphi_j\}_{j=1}^n$ a family of $n$ functions on $[0,1]^d$. We consider the approximate recovery of functions $f$ on $[0,1]^d$ from the sampled values $f(x^1), \ldots, f(x^n)$ by the linear sampling algorithm $L_n(X_n,\Phi_n,f) := \sum_{j=1}^n f(x^j)\varphi_j$. The error of sampling recovery is measured in the norm of the space $L_q([0,1]^d)$ or in the energy norm of the isotropic Sobolev space $W^\gamma_q([0,1]^d)$ with $\gamma > 0$. Functions $f$ to be recovered are from the unit ball in Besov type spaces of anisotropic smoothness, in particular, the spaces $B^a_{p,\theta}$ of nonuniform mixed smoothness $a \in \mathbb{R}^d_+$, and the spaces $B^{\alpha,\beta}_{p,\theta}$ of a "hybrid" of mixed smoothness $\alpha > 0$ and isotropic smoothness $\beta \in \mathbb{R}$. We constructed optimal linear sampling algorithms $L_n(X_n^*,\Phi_n^*,\cdot)$ on special sparse grids $X_n^*$ with a family $\Phi_n^*$ of linear combinations of integer or half-integer translated dilations of tensor products of B-splines, and computed the asymptotic order of the error of the optimal recovery. This construction is based on B-spline quasi-interpolation representations of functions in $B^a_{p,\theta}$ and $B^{\alpha,\beta}_{p,\theta}$. As a consequence, we obtained the asymptotic order of optimal cubature formulas for numerical integration of functions from the unit ball of these Besov type spaces. Comment: arXiv admin note: text overlap with arXiv:1009.438
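
    To make the form of such a linear sampling algorithm concrete, the sketch below (a deliberately simple one-dimensional illustration with piecewise linear B-splines on a uniform grid, not the paper's sparse-grid quasi-interpolation construction) evaluates $L_n(X_n,\Phi_n,f) = \sum_{j=1}^n f(x^j)\varphi_j$:

        # Illustrative 1D linear sampling operator: recover f from point values only.
        import numpy as np

        def hat(x, center, h):
            """Piecewise linear B-spline (hat function) of width 2h centered at `center`."""
            return np.maximum(0.0, 1.0 - np.abs(x - center) / h)

        def sampling_operator(f, n):
            """Return x -> sum_j f(x_j) * hat_j(x) on [0, 1] with n equispaced nodes."""
            nodes = np.linspace(0.0, 1.0, n)
            h = 1.0 / (n - 1)
            samples = f(nodes)  # the only information used: sampled function values
            return lambda x: sum(fj * hat(x, xj, h) for fj, xj in zip(samples, nodes))

        f = lambda x: np.sin(2 * np.pi * x) * np.exp(x)
        approx = sampling_operator(f, 33)
        xs = np.linspace(0.0, 1.0, 1000)
        print(np.max(np.abs(f(xs) - approx(xs))))  # error on a fine grid

    The paper's algorithms have the same structure, but with $X_n^*$ a sparse grid in $[0,1]^d$ and $\Phi_n^*$ built from dilated and translated tensor-product B-splines.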

    Mixed Moduli of Smoothness in $L_p$, $1<p<\infty$

    In this paper we survey recent developments over the last 25 years on the mixed fractional moduli of smoothness of periodic functions from $L_p$, $1<p<\infty$. In particular, the paper includes monotonicity properties, equivalence and realization results, sharp Jackson, Marchaud, and Ul'yanov inequalities, interrelations between the moduli of smoothness, the Fourier coefficients, and "angular" approximation. The sharpness of the results presented is discussed.
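
    For readers unfamiliar with the object surveyed here, the mixed modulus of smoothness in the bivariate case is defined (in its standard integer-order form; the survey treats the fractional generalization) by
        $$ \omega_{k_1,k_2}(f, t_1, t_2)_p \;=\; \sup_{|h_1|\le t_1,\; |h_2|\le t_2} \big\| \Delta_{h_1,1}^{k_1} \Delta_{h_2,2}^{k_2} f \big\|_p , $$
    where $\Delta_{h,i}^{k}$ denotes the $k$-th order difference with step $h$ taken in the $i$-th variable, so that smoothness is measured separately in each coordinate direction.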

    Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality

    Deep learning has shown high performance in various types of tasks, from visual recognition to natural language processing, which indicates its superior flexibility and adaptivity. To understand this phenomenon theoretically, we develop a new approximation and estimation error analysis of deep learning with the ReLU activation for functions in a Besov space and its variant with mixed smoothness. The Besov space is a considerably general function space, including the Hölder and Sobolev spaces, and in particular it can capture spatial inhomogeneity of smoothness. Through the analysis in the Besov space, it is shown that deep learning can achieve the minimax optimal rate and outperform any non-adaptive (linear) estimator such as kernel ridge regression; this shows that deep learning has higher adaptivity to the spatial inhomogeneity of the target function than other estimators such as linear ones. In addition, it is shown that deep learning can avoid the curse of dimensionality if the target function is in a mixed smooth Besov space. We also show that the dependency of the convergence rate on the dimensionality is tight due to its minimax optimality. These results support the high adaptivity of deep learning and its superior ability as a feature extractor.

    Sampling on energy-norm based sparse grids for the optimal recovery of Sobolev type functions in $H^\gamma$

    We investigate the rate of convergence of linear sampling numbers of the embedding $H^{\alpha,\beta}(\mathbb{T}^d) \hookrightarrow H^\gamma(\mathbb{T}^d)$. Here $\alpha$ governs the mixed smoothness and $\beta$ the isotropic smoothness in the space $H^{\alpha,\beta}(\mathbb{T}^d)$ of hybrid smoothness, whereas $H^{\gamma}(\mathbb{T}^d)$ denotes the isotropic Sobolev space. If $\gamma>\beta$ we obtain sharp polynomial decay rates for the first embedding, realized by sampling operators based on "energy-norm based sparse grids" for classical trigonometric interpolation. This complements earlier work by Griebel, Knapek and Dũng, Ullrich, where general linear approximations have been considered. In addition, we study the embedding $H^\alpha_{\mathrm{mix}}(\mathbb{T}^d) \hookrightarrow H^{\gamma}_{\mathrm{mix}}(\mathbb{T}^d)$ and achieve optimality for Smolyak's algorithm applied to classical trigonometric interpolation. This can be applied to investigate the sampling numbers for the embedding $H^\alpha_{\mathrm{mix}}(\mathbb{T}^d) \hookrightarrow L_q(\mathbb{T}^d)$ for $2<q\leq \infty$, where again Smolyak's algorithm yields the optimal order. The precise decay rates for the sampling numbers in the mentioned situations always coincide with those for the approximation numbers, except possibly in the limiting situation $\beta = \gamma$ (including the embedding into $L_2(\mathbb{T}^d)$). The best we could prove there is a (probably) non-sharp result with a logarithmic gap between the lower and upper bound.
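
    As a point of reference, one common (equivalent) normalization of the hybrid smoothness norm, with conventions differing slightly between papers, is
        $$ \|f\|_{H^{\alpha,\beta}(\mathbb{T}^d)}^2 \;=\; \sum_{k \in \mathbb{Z}^d} \Big(\prod_{j=1}^d (1+|k_j|^2)^{\alpha}\Big)\,\big(1+\|k\|_2^2\big)^{\beta}\,|\hat f(k)|^2 , $$
    so that $\beta = 0$ recovers the mixed Sobolev space $H^\alpha_{\mathrm{mix}}(\mathbb{T}^d)$ and $\alpha = 0$ the isotropic space $H^\beta(\mathbb{T}^d)$.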

    Volumes of unit balls of mixed sequence spaces

    The volume of the unit ball of the Lebesgue sequence space $\ell_p^m$ has been very well known since the times of Dirichlet. We calculate the volume of the unit ball in the mixed norm $\ell^n_q(\ell_p^m)$, whose special cases are nowadays popular in machine learning under the name of group lasso. We consider the real as well as the complex case. The result is given by a closed formula involving the gamma function, only slightly more complicated than the one of Dirichlet. We close with an overview of open problems.
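
    For context, Dirichlet's classical formula referenced here gives the volume of the real unit ball of $\ell_p^m$ as $\big(2\Gamma(1+1/p)\big)^m / \Gamma(1+m/p)$; the paper's closed formula for the mixed norm $\ell^n_q(\ell_p^m)$ generalizes this and is not reproduced here. A short Python check of the classical formula:

        # Dirichlet's formula: volume of the unit ball of real l_p^m.
        from math import gamma

        def volume_lp_ball(m, p):
            """Volume of the unit ball of l_p^m: (2*Gamma(1 + 1/p))**m / Gamma(1 + m/p)."""
            return (2 * gamma(1 + 1 / p)) ** m / gamma(1 + m / p)

        # Sanity checks: p = 2, m = 2 gives pi (unit disc);
        # p = 1, m = 2 gives 2 (the cross-polytope |x| + |y| <= 1).
        print(volume_lp_ball(2, 2), volume_lp_ball(2, 1))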

    Tight error bounds for rank-1 lattice sampling in spaces of hybrid mixed smoothness

    We consider the approximate recovery of multivariate periodic functions from a discrete set of function values taken on a rank-$s$ integration lattice. The main result is the fact that any (non-)linear reconstruction algorithm taking function values on a rank-$s$ lattice of size $M$ has a dimension-independent lower bound of $2^{-(\alpha+1)/2} M^{-\alpha/2}$ when considering the optimal worst-case error with respect to function spaces of (hybrid) mixed smoothness $\alpha>0$ on the $d$-torus. We complement this lower bound with upper bounds that coincide up to logarithmic terms. These upper bounds are obtained by a detailed analysis of a rank-1 lattice sampling strategy, where the rank-1 lattices are constructed by a component-by-component (CBC) method. This improves on earlier results obtained in [25] and [27]. The lattice (group) structure allows for an efficient approximation of the underlying function from its sampled values using a single one-dimensional fast Fourier transform. This is one reason why these algorithms keep attracting significant interest. We compare our results to recent (almost) optimal methods based upon samples on sparse grids.
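
    To illustrate the mechanism described above, the sketch below (with an illustrative, not CBC-optimized, generating vector) samples a bivariate periodic function on a rank-1 lattice and recovers Fourier coefficients with a single one-dimensional FFT:

        # Rank-1 lattice sampling sketch: x_j = frac(j * z / M), j = 0, ..., M - 1.
        import numpy as np

        M = 257                        # lattice size (illustrative)
        z = np.array([1, 33])          # generating vector (illustrative, not CBC-chosen)
        j = np.arange(M)
        X = (np.outer(j, z) / M) % 1.0                 # M lattice points in [0, 1)^2

        f = lambda x: np.exp(np.sin(2 * np.pi * x[:, 0]) * np.cos(2 * np.pi * x[:, 1]))
        dft = np.fft.fft(f(X)) / M                     # one 1D FFT of the sample vector

        def lattice_coefficient(k):
            """Approximate hat{f}(k): on the lattice, k.x_j = j * (k.z) / M mod 1."""
            return dft[int(np.dot(k, z)) % M]

        print(lattice_coefficient(np.array([0, 0])))   # approximates the integral of f
        print(lattice_coefficient(np.array([1, -1])))

    Aliasing occurs whenever two frequencies $k \ne k'$ satisfy $k\cdot z \equiv k'\cdot z \pmod M$; controlling this aliasing on a hyperbolic cross of frequencies is what the CBC construction of the generating vector is designed to do.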

    The role of Frolov's cubature formula for functions with bounded mixed derivative

    We prove upper bounds on the order of convergence of Frolov's cubature formula for numerical integration in function spaces of dominating mixed smoothness on the unit cube with homogeneous boundary condition. More precisely, we study worst-case integration errors for Besov spaces $\mathbf{B}^s_{p,\theta}$ and Triebel-Lizorkin spaces $\mathbf{F}^s_{p,\theta}$, and our results treat the whole range of admissible parameters ($s\geq 1/p$). In particular, we obtain upper bounds for the difficult case of small smoothness, which for Triebel-Lizorkin spaces $\mathbf{F}^s_{p,\theta}$ with $1<\theta<p<\infty$ is the range $1/p<s\leq 1/\theta$. The presented upper bounds on the worst-case error show a completely different behavior compared to "large" smoothness $s>1/\theta$. In the latter case the presented upper bounds are optimal, i.e., they cannot be improved by any other cubature formula. The optimality for "small" smoothness is open. Comment: 23 pages.
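
    For orientation, Frolov's cubature formula is an equal-weight lattice rule: for a suitable invertible matrix $T$ (obtained from an algebraic construction so that the associated lattice is admissible) and a scaling parameter $a > 1$, one common way to write it (conventions regarding the use of the lattice versus its dual differ between papers) is
        $$ Q_a(f) \;=\; \frac{|\det T|}{a^{d}} \sum_{m \in \mathbb{Z}^d} f\!\left(\tfrac{1}{a}\, T m\right), $$
    where the sum is effectively finite because $f$ is supported in the unit cube; the number of nodes grows like $a^d / |\det T|$, and the quality of the rule rests on the number-theoretic properties of $T$.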