Approximation by non-symmetric networks for cross-domain learning
For the past 30 years or so, machine learning has stimulated a great deal of
research in the study of approximation capabilities (expressive power) of a
multitude of processes, such as approximation by shallow or deep neural
networks, radial basis function networks, and a variety of kernel based
methods. Motivated by applications such as invariant learning, transfer
learning, and synthetic aperture radar imaging, we initiate in this paper a
general approach to study the approximation capabilities of kernel based
networks using non-symmetric kernels. While singular value decomposition is a
natural first tool for studying such kernels, we consider a more general
approach that allows the use of a family of kernels, such as generalized
translation networks (which include neural networks and translation invariant kernels as
special cases) and rotated zonal function kernels. Naturally, unlike
traditional kernel based approximation, we cannot require the kernels to be
positive definite. In particular, we obtain estimates on the accuracy of
uniform approximation of functions in a (L^∞)-Sobolev class by ReLU^r
networks when r is not necessarily an integer. Our general results apply to
the approximation of functions with small smoothness compared to the dimension
of the input space.
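
As a rough illustration of the objects discussed above (the notation here is
assumed for exposition and is not taken from the paper): a kernel based
network built from a possibly non-symmetric kernel K, together with the
ReLU^r activation for a not-necessarily-integer power r, can be sketched as

% Sketch only; K, y_k, a_k, n, and r are illustrative names, not the paper's notation.
% A kernel based network: symmetry K(x, y) = K(y, x) is NOT assumed here.
\[
  G_n(x) \;=\; \sum_{k=1}^{n} a_k \, K(x, y_k),
  \qquad K(x, y) \neq K(y, x) \text{ in general}.
\]
% The ReLU^r activation: the ordinary ReLU raised to a power r > 0,
% which need not be an integer.
\[
  \mathrm{ReLU}^r(t) \;=\; \bigl(\max(0, t)\bigr)^{r}.
\]

Because K need not be symmetric (and hence need not be positive definite),
classical reproducing-kernel arguments do not apply directly, which is what
motivates the more general family-of-kernels approach described in the abstract.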