    Bernstein Numbers of Embeddings of Isotropic and Dominating Mixed Besov Spaces

    The purpose of the present paper is to investigate the decay of Bernstein numbers of the embedding from $B^t_{p_1,q}((0,1)^d)$ into the space $L_{p_2}((0,1)^d)$. The asymptotic behaviour of Bernstein numbers of the identity $id: S_{p_1,p_1}^{t}B((0,1)^d)\rightarrow L_{p_2}((0,1)^d)$ will also be considered.Comment: 31 pages, 1 figure

    Metric entropy, n-widths, and sampling of functions on manifolds

    We first investigate the asymptotics of the Kolmogorov metric entropy and nonlinear n-widths of approximation spaces on some function classes on manifolds and quasi-metric measure spaces. Secondly, we develop constructive algorithms to represent those functions within a prescribed accuracy. The constructions can be based on either spectral information or scattered samples of the target function. Our algorithmic scheme is asymptotically optimal in the sense of nonlinear n-widths, and asymptotically optimal up to a logarithmic factor with respect to the metric entropy.

    Weyl Numbers of Embeddings of Tensor Product Besov Spaces

    In this paper we investigate the asymptotic behaviour of Weyl numbers of embeddings of tensor product Besov spaces into Lebesgue spaces. These results will be compared with the known behaviour of entropy numbers.Comment: 54 pages, 2 figures

    Predictability, complexity and learning

    We define {\em predictive information} $I_{\rm pred}(T)$ as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times $T$: $I_{\rm pred}(T)$ can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then $I_{\rm pred}(T)$ grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power--law growth is associated, for example, with the learning of infinite-parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and in the analysis of physical systems through statistical mechanics and dynamical systems theory. Further, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of $I_{\rm pred}(T)$ provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in different problems in physics, statistics, and biology.Comment: 53 pages, 3 figures, 98 references, LaTeX2e
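    The defining quantity above can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it estimates $I_{\rm pred}$ for a simulated two-state Markov chain by computing the plug-in mutual information between length-$t$ past and future blocks. The function names (`mutual_information`, `predictive_information`) and the choice of a symmetric chain with switching probability 0.1 are illustrative assumptions. For a finite-state Markov chain the predictive information saturates at a finite value, which is the first of the three behaviors described in the abstract.

    ```python
    import math
    import random
    from collections import Counter

    def mutual_information(pairs):
        """Plug-in estimate of I(X;Y) in nats from a list of (x, y) samples."""
        n = len(pairs)
        joint = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        mi = 0.0
        for (x, y), c in joint.items():
            # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts cancelled against n
            mi += (c / n) * math.log(c * n / (px[x] * py[y]))
        return mi

    def predictive_information(series, t):
        """Estimate I_pred(t): mutual information between the length-t past
        and the length-t future, pooled over all positions of the series."""
        pairs = []
        for i in range(t, len(series) - t + 1):
            past = tuple(series[i - t:i])
            future = tuple(series[i:i + t])
            pairs.append((past, future))
        return mutual_information(pairs)

    # Simulate a two-state Markov chain with switching probability p (an
    # illustrative choice; any finite-state chain would behave similarly).
    random.seed(0)
    p = 0.1
    state, series = 0, []
    for _ in range(200_000):
        series.append(state)
        if random.random() < p:
            state = 1 - state

    # The estimates saturate as t grows: past and future are conditionally
    # independent given the single most recent state.
    for t in (1, 2, 4):
        print(t, round(predictive_information(series, t), 3))
    ```

    Because the chain is Markov, the future block depends on the past block only through the last observed state, so the estimates for $t = 1, 2, 4$ all cluster around the same finite value, in contrast to the logarithmic or power-law growth the abstract associates with learnable model families.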