
    Exponential Approximation of Bandlimited Functions from Average Oversampling

    Weighted average sampling is more practical and numerically more stable than sampling at single points, as in the classical Shannon sampling framework. Using frame theory, one can completely reconstruct a bandlimited function from suitably chosen average sample data. When only finitely many samples are available, truncating the complete reconstruction series with the standard dual frame results in very slow convergence. We present in this note a method of reconstructing a bandlimited function from finite average oversampling with an exponentially decaying approximation error.
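    As an illustration of the sampling model only (not of the paper's reconstruction method), the sketch below generates average samples of a bandlimited test function by numerical quadrature; the box-shaped averaging window, its width, and the sample spacing are hypothetical choices.

```python
import numpy as np

def f(t):
    """Bandlimited test function (bandwidth pi): a combination of sinc pulses."""
    return np.sinc(t) + 0.5 * np.sinc(t - 1.0)

def average_samples(f, points, width=0.3, quad_nodes=101):
    """Local averages (1/width) * integral of f over [t_k - width/2, t_k + width/2],
    approximated by the trapezoidal rule; these replace exact point samples f(t_k)."""
    samples = []
    for t in points:
        u = np.linspace(t - width / 2, t + width / 2, quad_nodes)
        samples.append(np.trapz(f(u), u) / width)
    return np.array(samples)

# Oversampling: spacing 0.8 is below the Nyquist spacing 1.0 for bandwidth pi.
points = np.arange(-10, 10, 0.8)
y = average_samples(f, points)
print(y[:5])
```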

    Multidimensional Analytic Signals and the Bedrosian Identity

    The analytic signal method via the Hilbert transform is a key tool in signal analysis and processing, especially in time-frequency analysis. Imaging and other applications to multidimensional signals call for an extension of the method to higher dimensions. We justify the use of partial Hilbert transforms to define multidimensional analytic signals from both engineering and mathematical perspectives. The associated Bedrosian identity $T(fg)=fTg$ for partial Hilbert transforms $T$ is then studied. Characterizations and several necessity theorems are established. We also make use of the identity to construct basis functions for time-frequency analysis.
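    In the one-dimensional case, the classical Bedrosian identity can be checked numerically with the discrete Hilbert transform: for a slowly varying amplitude $f$ and a high-frequency carrier $g$, $H(fg)$ agrees with $fHg$. The sketch below uses scipy.signal.hilbert, which returns the analytic signal $x + iHx$; the specific amplitude, carrier frequency, and grid are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 4096, endpoint=False)
f = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)     # low-frequency amplitude (spectrum near 0)
g = np.cos(2 * np.pi * 200 * t)               # high-frequency carrier

# hilbert() returns the analytic signal x + i*H[x]; take the imaginary part for H[x].
H_fg = np.imag(hilbert(f * g))                # H(fg)
f_Hg = f * np.imag(hilbert(g))                # f * Hg

print(np.max(np.abs(H_fg - f_Hg)))            # small for this periodic example
```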

    Convergence Analysis of the Gaussian Regularized Shannon Sampling Formula

    We consider the reconstruction of a bandlimited function from finite localized sample data. Truncating the classical Shannon sampling series results in an unsatisfactory convergence rate due to the slow decay of the sinc function. To overcome this drawback, a simple and highly effective method, called the Gaussian regularization of the Shannon series, was proposed in engineering and has received remarkable attention. It works by multiplying the sinc function in the Shannon series by a regularizing Gaussian function. L. Qian (Proc. Amer. Math. Soc., 2003) established the convergence rate $O(\sqrt{n}\exp(-\frac{\pi-\delta}{2}n))$ for this method, where $\delta<\pi$ is the bandwidth and $n$ is the number of sample data. C. Micchelli et al. (J. Complexity, 2009) proposed a different regularized method and obtained the corresponding convergence rate $O(\frac{1}{\sqrt{n}}\exp(-\frac{\pi-\delta}{2}n))$. The latter rate is by far the best among all regularized methods for the Shannon series. However, their regularized method involves solving a linear system and is implicit and more complicated. The main objective of this note is to show that the Gaussian regularization of the Shannon series can also achieve the same best convergence rate as that of C. Micchelli et al. We also show that the Gaussian regularization method can improve the convergence rate for the useful average sampling. Finally, the outstanding performance in numerical experiments justifies our results.
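    A minimal sketch of the Gaussian-regularized truncated Shannon series for a function bandlimited to $[-\delta,\delta]$ with $\delta<\pi$ and sampled at the integers: each sinc term is damped by a Gaussian factor. The choice of the Gaussian variance proportional to $n/(\pi-\delta)$ is a commonly recommended one and is taken here as an assumption, as are the test function and parameter values.

```python
import numpy as np

def gauss_shannon(f, t, n=20, delta=0.5 * np.pi):
    """Truncated Shannon series over the 2n+1 integers nearest t,
    with each sinc term damped by a Gaussian factor."""
    r = n / (np.pi - delta)                      # variance parameter (assumed common choice)
    k = np.arange(np.floor(t) - n, np.floor(t) + n + 1)
    terms = f(k) * np.sinc(t - k) * np.exp(-(t - k) ** 2 / (2 * r))
    return np.sum(terms)

# Test function bandlimited to [-pi/2, pi/2].
f = lambda x: np.sinc(0.5 * x)
t0 = 3.3
print(gauss_shannon(f, t0), f(t0))               # the two values should be very close
```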

    Exponential Approximation of Bandlimited Random Processes from Oversampling

    The Shannon sampling theorem for bandlimited wide-sense stationary random processes was established in 1957, and it and its extensions to various random processes have been widely studied since then. However, truncation of the Shannon series suffers from slow convergence. Specifically, it is well known that the mean-square approximation error of the truncated series at $n$ points sampled at the exact Nyquist rate is of the order $O(\frac{1}{\sqrt{n}})$. We consider the reconstruction of bandlimited random processes from finitely many oversampled points, namely, points whose spacing is less than the Nyquist sampling spacing. The optimal deterministic linear reconstruction method and the associated intrinsic approximation error are studied. It is found that one can achieve exponentially decaying (but not faster) approximation errors from oversampling. Two practical reconstruction methods with exponential approximation ability are also presented.

    Universalities of Reproducing Kernels Revisited

    Kernel methods have been widely applied to machine learning and other problems of approximating an unknown function from finite sample data. To ensure arbitrary accuracy of such approximations, various denseness conditions are imposed on the selected kernel. This note contributes to the study of universal, characteristic, and $C_0$-universal kernels. We first give a simple and direct description of the differences and relations among these three kinds of universality. We then focus on translation-invariant and weighted polynomial kernels. A simpler and shorter proof of the known characterization of characteristic translation-invariant kernels is presented. The main purpose of the note is to give a careful discussion of the universality of weighted polynomial kernels.
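    The Gaussian kernel is the standard example of a translation-invariant kernel that is universal, characteristic, and $C_0$-universal. The sketch below merely evaluates its Gram matrix on sample data; the data and bandwidth are placeholder choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    Translation-invariant: it depends only on the difference x_i - y_j."""
    sq = np.sum(X ** 2, axis=1)[:, None] + np.sum(Y ** 2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma ** 2))

X = np.random.default_rng(0).normal(size=(5, 2))
K = gaussian_kernel(X, X)
print(K.shape, np.allclose(K, K.T))   # symmetric positive semi-definite Gram matrix
```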

    Reproducing Kernel Banach Spaces with the l1 Norm II: Error Analysis for Regularized Least Square Regression

    A typical approach to estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem brings the advantage of automatically discarding the hypothesis error from the sum. Following this direction, we illustrate how reproducing kernel Banach spaces with the l1 norm can be applied to improve the learning rate estimate of l1-regularization in machine learning.
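    As background for the type of scheme analyzed here, the sketch below fits an l1-regularized least squares model in which the predictor is a linear combination of kernel functions centered at the data (coefficient regularization). It uses scikit-learn's Lasso on a Gaussian Gram matrix purely for illustration; this is not the paper's analysis, and the data, kernel, and regularization parameter are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)

# Gram matrix of a Gaussian kernel; predictor f(x) = sum_j c_j K(x, x_j).
K = np.exp(-(X - X.T) ** 2 / (2 * 0.3 ** 2))

# The l1 penalty on the coefficients c promotes a sparse kernel expansion.
model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(K, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_))
```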

    Vector-valued Reproducing Kernel Banach Spaces with Group Lasso Norms

    Aiming at a mathematical foundation for kernel methods in coefficient regularization for multi-task learning, we investigate the theory of vector-valued reproducing kernel Banach spaces (RKBS) with $L_{p,1}$ norms, which contains the sparse learning scheme $p=1$ and the group lasso $p=2$. We construct RKBSs that are equipped with such group lasso norms and admit the linear representer theorem for regularized learning schemes. The corresponding kernels that are admissible for the construction are discussed.
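    For orientation, the group lasso penalty referred to above sums, over groups of coefficients, the within-group $\ell^p$ norms (with $p=2$ for the classical group lasso, while $p=1$ recovers a plain $\ell^1$ penalty). A small sketch with hypothetical coefficient groups:

```python
import numpy as np

def group_norm(coefs, groups, p=2):
    """L_{p,1}-style penalty: sum over groups of the l^p norm of each group's coefficients."""
    return sum(np.linalg.norm(coefs[g], ord=p) for g in groups)

c = np.array([0.5, -1.2, 0.0, 0.3, 0.0, 2.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]

print(group_norm(c, groups, p=2))   # group lasso (p = 2)
print(group_norm(c, groups, p=1))   # reduces to the plain l1 norm of c
```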

    Existence of the Bedrosian Identity for Singular Integral Operators

    The Hilbert transform $H$ satisfies the Bedrosian identity $H(fg)=fHg$ whenever the supports of the Fourier transforms of $f,g\in L^2(R)$ are respectively contained in $A=[-a,b]$ and $B=R\setminus(-b,a)$, $0\le a,b\le+\infty$. Attracted by this interesting result arising from time-frequency analysis, we investigate the existence of such an identity for a general bounded singular integral operator on $L^2(R^d)$ and for general support sets $A$ and $B$. A geometric characterization of the support sets for the existence of the Bedrosian identity is established. Moreover, the support sets for the partial Hilbert transforms are all found. In particular, for the Hilbert transform to satisfy the Bedrosian identity, the support sets must be given as above.

    Multi-task Learning in Vector-valued Reproducing Kernel Banach Spaces with the $\ell^1$ Norm

    Targeting sparse multi-task learning, we consider regularization models with an $\ell^1$ penalty on the coefficients of kernel functions. In order to provide a kernel method for this model, we construct a class of vector-valued reproducing kernel Banach spaces with the $\ell^1$ norm. The notion of multi-task admissible kernels is proposed so that the constructed spaces have desirable properties, including the crucial linear representer theorem. Such kernels are related to bounded Lebesgue constants of a kernel interpolation problem. We study the Lebesgue constants of multi-task kernels and provide examples of admissible kernels. Furthermore, we present numerical experiments on both synthetic data and real-world benchmark data to demonstrate the advantages of the proposed construction and regularization models.
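    Schematically, a regularization model of this kind can be written as below, where the loss $L$, the matrix-valued kernel $K$ (with $T$ tasks and $m$ inputs), and the weight $\lambda$ are generic placeholders rather than the paper's exact formulation:

```latex
\min_{c_1,\dots,c_m \in \mathbb{R}^T}\;
\sum_{i=1}^{m} L\!\Big(\sum_{j=1}^{m} K(x_i,x_j)\,c_j,\; y_i\Big)
\;+\; \lambda \sum_{j=1}^{m} \|c_j\|_1 ,
\qquad f(x) = \sum_{j=1}^{m} K(x,x_j)\,c_j .
```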

    Vector-valued Reproducing Kernel Banach Spaces with Applications to Multi-task Learning

    Motivated by multi-task machine learning with Banach spaces, we propose the notion of vector-valued reproducing kernel Banach spaces (RKBS). Basic properties of the spaces and the associated reproducing kernels are investigated. We also present feature map constructions and several concrete examples of vector-valued RKBS. The theory is then applied to multi-task machine learning. In particular, the representer theorem and characterization equations for the minimizer of regularized learning schemes in vector-valued RKBS are established.