Exponential Approximation of Bandlimited Functions from Average Oversampling
Weighted average sampling is more practical and numerically more stable than sampling at single points, as in the classical Shannon sampling framework. Using frame theory, one can completely reconstruct a bandlimited function from suitably chosen average sample data. When only finitely many samples are available, truncating the complete reconstruction series with the standard dual frame results in very slow convergence. We present in this note a method of reconstructing a bandlimited function from finite average oversampling with an exponentially decaying approximation error.
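As a rough numerical illustration of why plain truncation converges slowly (a sketch of the classical point-sampling case, not the paper's average-oversampling method), the following compares the truncated Shannon series at two truncation lengths; the test function, evaluation point, and lengths are arbitrary choices:

```python
import numpy as np

def sinc_reconstruct(f, t, n_terms):
    """Truncated Shannon series: sum of f(n) * sinc(t - n) over |n| <= n_terms.

    Assumes f is bandlimited to [-pi, pi] and sampled at the integers
    (unit Nyquist spacing); np.sinc(x) = sin(pi*x)/(pi*x).
    """
    n = np.arange(-n_terms, n_terms + 1)
    return np.sum(f(n) * np.sinc(t - n))

# A concrete bandlimited test function: a shifted sinc (band [-pi, pi]).
f = lambda x: np.sinc(x - 0.3)

t = 0.5
err_small = abs(sinc_reconstruct(f, t, 10) - f(t))
err_large = abs(sinc_reconstruct(f, t, 100) - f(t))
print(err_small, err_large)
```

Tenfold more samples reduce the error only about tenfold, reflecting the slow (roughly 1/N) decay of the truncation tail.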
Multidimensional Analytic Signals and the Bedrosian Identity
The analytic signal method via the Hilbert transform is a key tool in signal analysis and processing, especially in time-frequency analysis. Imaging and other applications to multidimensional signals call for an extension of the method to higher dimensions. We justify the use of partial Hilbert transforms to define multidimensional analytic signals from both engineering and mathematical perspectives. The important associated Bedrosian identity for partial Hilbert transforms is then studied. Characterizations and several necessity theorems are established. We also make use of the identity to construct basis functions for time-frequency analysis.
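For readers less familiar with the one-dimensional construction being extended here, the sketch below builds the discrete analytic signal with a plain FFT (the same idea as scipy.signal.hilbert); the signal and its length are arbitrary illustrative choices:

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal: zero out negative frequencies, double positive ones.

    This mirrors the construction f + i*H(f), with H the Hilbert transform.
    """
    N = len(x)          # N assumed even here
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0
    h[N // 2] = 1.0     # Nyquist bin
    return np.fft.ifft(X * h)

# For a pure cosine the analytic signal is a complex exponential,
# so its envelope |f + i*H(f)| is identically 1.
N = 256
t = np.arange(N) / N
x = np.cos(2 * np.pi * 8 * t)
envelope = np.abs(analytic_signal(x))
print(envelope.min(), envelope.max())
```

The envelope and instantaneous phase extracted this way are what time-frequency analysis reads off the analytic signal.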
Convergence Analysis of the Gaussian Regularized Shannon Sampling Formula
We consider the reconstruction of a bandlimited function from its finite localized sample data. Truncating the classical Shannon sampling series results in an unsatisfactory convergence rate due to the slow decay of the sinc function. To overcome this drawback, a simple and highly effective method, called the Gaussian regularization of the Shannon series, was proposed in the engineering literature and has received remarkable attention. It works by multiplying the sinc function in the Shannon series by a regularized Gaussian function. L. Qian (Proc. Amer. Math. Soc., 2003) established a convergence rate for this method in terms of the bandwidth and the number of sample data. C. Micchelli et al. (J. Complexity, 2009) proposed a different regularization method and obtained a corresponding convergence rate that is by far the best among all regularized methods for the Shannon series. However, their regularization method involves solving a linear system and is implicit and more complicated. The main objective of this note is to show that the Gaussian regularization of the Shannon series can achieve the same best convergence rate as that of C. Micchelli et al. We also show that the Gaussian regularization method can improve the convergence rate for the useful average sampling. Finally, numerical experiments confirm our theoretical results.
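A minimal sketch of the Gaussian regularization in the oversampling setting: the function below is bandlimited to half the sampling bandwidth, and the sinc terms are damped by a Gaussian whose variance grows linearly with the truncation length, as in the convergence analyses. The test function, band fraction, and truncation length are illustrative choices, not taken from the paper:

```python
import numpy as np

delta = 0.5          # band fraction: f is bandlimited to [-delta*pi, delta*pi]
N = 20               # samples kept on each side of the evaluation point
r2 = N / (np.pi * (1 - delta))   # Gaussian variance choice from the analysis

f = lambda x: np.sinc(delta * (x - 0.3))   # a concrete bandlimited test function

def truncated(t):
    n = np.arange(np.floor(t) - N, np.floor(t) + N + 1)
    return np.sum(f(n) * np.sinc(t - n))

def gaussian_regularized(t):
    n = np.arange(np.floor(t) - N, np.floor(t) + N + 1)
    return np.sum(f(n) * np.sinc(t - n) * np.exp(-(t - n) ** 2 / (2 * r2)))

ts = [0.2, 0.5, 0.8]
trunc_err = max(abs(truncated(t) - f(t)) for t in ts)
gauss_err = max(abs(gaussian_regularized(t) - f(t)) for t in ts)
print(trunc_err, gauss_err)
```

With the same 41 samples, the Gaussian-damped sum is several orders of magnitude more accurate than plain truncation, which is the exponential-versus-polynomial gap the note quantifies.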
Exponential Approximation of Bandlimited Random Processes from Oversampling
The Shannon sampling theorem for bandlimited wide-sense stationary random processes was established in 1957, and it and its extensions to various random processes have been widely studied since then. However, truncation of the Shannon series suffers from slow convergence. Specifically, it is well known that the mean-square approximation error of the series truncated at points sampled at the exact Nyquist rate decays only at a slow rate. We consider the reconstruction of bandlimited random processes from finitely many oversampling points, namely, where the distance between consecutive sample points is shorter than the Nyquist sampling interval. The optimal deterministic linear reconstruction method and the associated intrinsic approximation error are studied. It is found that one can achieve exponentially decaying (but not faster) approximation errors from oversampling. Two practical reconstruction methods with exponential approximation ability are also presented.
Universalities of Reproducing Kernels Revisited
Kernel methods have been widely applied to machine learning and other questions of approximating an unknown function from its finite sample data. To ensure arbitrary accuracy of such approximations, various denseness conditions are imposed on the selected kernel. This note contributes to the study of universal, characteristic, and c0-universal kernels. We first give a simple and direct description of the differences and relations among these three kinds of universality. We then focus on translation-invariant and weighted polynomial kernels. A simple and shorter proof of the known characterization of characteristic translation-invariant kernels will be presented. The main purpose of the note is to give a delicate discussion of the universality of weighted polynomial kernels.
Reproducing Kernel Banach Spaces with the l1 Norm II: Error Analysis for Regularized Least Square Regression
A typical approach to estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem brings the advantage of automatically discarding the hypothesis error from the sum. Following this direction, we illustrate how reproducing kernel Banach spaces with the l1 norm can be applied to improve the learning rate estimate of l1-regularization in machine learning.
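The l1-regularization schemes analyzed here are, in practice, typically solved by iterative soft-thresholding. The sketch below runs ISTA on a synthetic sparse regression problem to show the sparsity the penalty induces; the data, penalty weight, and iteration count are made-up illustrative choices, not the paper's RKBS construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth: 3 active coefficients out of 10.
n, d = 200, 10
w_true = np.zeros(d)
w_true[[1, 4, 7]] = 2.0
X = rng.standard_normal((n, d))
y = X @ w_true + 0.01 * rng.standard_normal(n)

# ISTA for min_w 0.5*||Xw - y||^2 + lam*||w||_1:
# a gradient step on the least-squares term, then soft-thresholding,
# which is the proximal map of the l1 penalty.
lam = 10.0
step = 1.0 / np.linalg.eigvalsh(X.T @ X).max()
w = np.zeros(d)
for _ in range(1000):
    w = w - step * X.T @ (X @ w - y)
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

print(np.round(w, 3))
```

The recovered coefficient vector is exactly sparse, with the true support and a small shrinkage bias on the active entries.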
Vector-valued Reproducing Kernel Banach Spaces with Group Lasso Norms
Aiming at a mathematical foundation for kernel methods in coefficient regularization for multi-task learning, we investigate the theory of vector-valued reproducing kernel Banach spaces (RKBS) with L_{p,1} norms, which contain the sparse learning scheme and the group lasso (p = 2). We construct RKBSs that are equipped with such group lasso norms and admit the linear representer theorem for regularized learning schemes. The corresponding kernels that are admissible for the construction are discussed.
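The group lasso norm mentioned above owes its group-level sparsity to its proximal map, which is block soft-thresholding. A minimal sketch (the grouping and shrinkage parameter are arbitrary examples):

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal map of lam * sum_g ||w_g||_2 (the group lasso penalty).

    Each group's block is shrunk toward zero by lam in Euclidean norm,
    and set exactly to zero when its norm is below lam -- this is what
    zeroes out whole groups at once.
    """
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.5, 0.5])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(w, groups, 2.0)
print(shrunk)   # first group (norm 5) scaled by 1 - 2/5; second group zeroed
```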
Existence of the Bedrosian Identity for Singular Integral Operators
The Hilbert transform H satisfies the Bedrosian identity H(fg) = fHg whenever the supports of the Fourier transforms of f and g are respectively contained in [-a, a] and in the complement of (-a, a) for some a > 0. Attracted by this interesting result arising from time-frequency analysis, we investigate the existence of such an identity for a general bounded singular integral operator and for general support sets A and B. A geometric characterization of the support sets for the existence of the Bedrosian identity is established. Moreover, the support sets for the partial Hilbert transforms are all found. In particular, for the Hilbert transform to satisfy the Bedrosian identity, the support sets must be given as above.
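The classical identity can be checked numerically for periodic signals, where the Hilbert transform is the Fourier multiplier -i*sign(k). The frequencies below (a low-pass f at +-2 and a high-pass g at +-10) satisfy the support condition with, e.g., a = 5; the grid size is an arbitrary choice:

```python
import numpy as np

def hilbert_transform(x):
    """Periodic Hilbert transform via the Fourier multiplier -i*sign(k)."""
    k = np.fft.fftfreq(len(x))
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(x)))

N = 128
t = np.arange(N) / N
f = np.cos(2 * np.pi * 2 * t)    # low-pass: frequencies +-2
g = np.cos(2 * np.pi * 10 * t)   # high-pass: frequencies +-10

# Bedrosian identity: H(fg) = f * H(g) when the supports are separated.
lhs = hilbert_transform(f * g)
rhs = f * hilbert_transform(g)
print(np.max(np.abs(lhs - rhs)))
```

The two sides agree to machine precision, while swapping the roles of f and g (violating the support condition) would not.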
Multi-task Learning in Vector-valued Reproducing Kernel Banach Spaces with the l1 Norm
Targeting sparse multi-task learning, we consider regularization models with an l1 penalty on the coefficients of kernel functions. In order to provide a kernel method for this model, we construct a class of vector-valued reproducing kernel Banach spaces with the l1 norm. The notion of multi-task admissible kernels is proposed so that the constructed spaces can have desirable properties, including the crucial linear representer theorem. Such kernels are related to bounded Lebesgue constants of a kernel interpolation question. We study the Lebesgue constants of multi-task kernels and provide examples of admissible kernels. Furthermore, we present numerical experiments on both synthetic data and real-world benchmark data to demonstrate the advantages of the proposed construction and regularization models.
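The Lebesgue constant of kernel interpolation mentioned above can be computed numerically in the scalar case: it is the sup over x of the sum of absolute values of the cardinal functions of the interpolation process. The Gaussian kernel, its width, and the node set below are arbitrary illustrative choices, not the multi-task kernels studied in the paper:

```python
import numpy as np

s = 0.15                          # kernel width (an illustrative choice)
kernel = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s ** 2))

nodes = np.linspace(0.0, 1.0, 10)
K = kernel(nodes, nodes)

# Cardinal functions ell_j: ell(x) = K(nodes, nodes)^{-1} applied to K(nodes, x);
# ell_j(node_i) = delta_ij, so the Lebesgue constant is at least 1.
grid = np.linspace(0.0, 1.0, 2001)
ell = np.linalg.solve(K, kernel(nodes, grid)).T   # shape (len(grid), len(nodes))
lebesgue = np.abs(ell).sum(axis=1).max()
print(lebesgue)
```

A bounded Lebesgue constant means interpolation is stable against perturbations of the data, which is the role it plays in the admissibility condition.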
Vector-valued Reproducing Kernel Banach Spaces with Applications to Multi-task Learning
Motivated by multi-task machine learning with Banach spaces, we propose the notion of vector-valued reproducing kernel Banach spaces (RKBS). Basic properties of the spaces and of the associated reproducing kernels are investigated. We also present feature map constructions and several concrete examples of vector-valued RKBS. The theory is then applied to multi-task machine learning. In particular, the representer theorem and characterization equations for the minimizer of regularized learning schemes in vector-valued RKBS are established.
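The representer theorem can be illustrated in the simpler scalar Hilbert-space (RKHS) setting: the regularized least-squares minimizer lies in the span of the kernel sections K(., x_i), with coefficients c = (K + lam*I)^{-1} y. The sketch below verifies numerically that this c minimizes the objective written in terms of the coefficients; the kernel, data, and regularization weight are made-up illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(30)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.5)   # Gaussian kernel matrix

lam = 0.1
c = np.linalg.solve(K + lam * np.eye(30), y)        # representer coefficients

# Regularized objective restricted to kernel expansions f = sum_i c_i K(., x_i):
# squared loss on the data plus lam times the squared RKHS norm c^T K c.
J = lambda v: np.sum((K @ v - y) ** 2) + lam * v @ K @ v

# c is the global minimizer of this convex quadratic, so random
# perturbations of the coefficients can only increase the objective.
worse = min(J(c + 0.1 * rng.standard_normal(30)) for _ in range(20))
print(J(c), worse)
```

The vector-valued RKBS case replaces the Hilbert norm with a Banach norm, which is where the admissibility conditions of the paper come in.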