Regularized Regression Problem in hyper-RKHS for Learning Kernels
This paper generalizes the two-stage kernel learning framework, illustrates
its utility for kernel learning and out-of-sample extensions, and proves
asymptotic convergence results for the introduced kernel learning model.
Algorithmically, we extend target alignment by hyper-kernels in the two-stage
kernel learning framework. The associated kernel learning task is formulated as
a regression problem in a hyper-reproducing kernel Hilbert space (hyper-RKHS),
i.e., learning on the space of kernels itself. To solve this problem, we
present two regression models with bivariate forms in this space, including
kernel ridge regression (KRR) and support vector regression (SVR) in the
hyper-RKHS. This affords significant model flexibility for kernel learning and
strong performance in real-world applications. Notably, the framework is
general: the learned kernel may be positive definite or indefinite, adapting to
varied requirements in kernel learning. Theoretically, we study the convergence
behavior of these learning algorithms in the hyper-RKHS and derive the learning
rates. Unlike traditional approximation analysis in an RKHS, our analysis must
account for the non-trivial dependence among pairwise samples and the
characterization of the hyper-RKHS. To the best of our knowledge, this is the
first work in learning theory to study the approximation performance of the
regularized regression problem in hyper-RKHS.
Comment: 25 pages, 3 figures
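As a concrete, heavily simplified illustration of regression in a hyper-RKHS, the sketch below fits kernel ridge regression over pairs of inputs, using label alignment as the regression target. The Gaussian hyper-kernel on concatenated pair features, the alignment target, and all parameter values are assumptions made for this sketch, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n points in R^d with binary labels.
n, d = 20, 3
X = rng.normal(size=(n, d))
y = np.sign(rng.normal(size=n))

# Target kernel from label alignment: T_ij = y_i * y_j (an assumed
# two-stage target; the framework admits other choices).
T = np.outer(y, y)

# Each training example of the hyper-regression is a PAIR of inputs.
pairs = [(i, j) for i in range(n) for j in range(i, n)]
P = np.array([np.concatenate([X[i], X[j]]) for i, j in pairs])
t = np.array([T[i, j] for i, j in pairs])

# Hypothetical Gaussian hyper-kernel on concatenated pair features.
def hyper_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression in the hyper-RKHS: solve (G + lam*I) alpha = t.
G = hyper_kernel(P, P)
lam = 1e-3
alpha = np.linalg.solve(G + lam * np.eye(len(P)), t)

# The learned kernel is a function of a pair (x, x'); symmetrize by
# averaging both orderings.  Note it need not be positive definite,
# matching the abstract's point that indefinite kernels are admitted.
def learned_kernel(x, xp):
    z1 = np.concatenate([x, xp])[None, :]
    z2 = np.concatenate([xp, x])[None, :]
    return float(0.5 * (hyper_kernel(z1, P) + hyper_kernel(z2, P)) @ alpha)
```

The resulting `learned_kernel` can then be plugged into any downstream kernel method; the paper additionally develops an SVR variant and learning-rate analysis that this sketch does not attempt.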
Nonlinear Pairwise Layer and Its Training for Kernel Learning
Kernel learning is a fundamental technique that has been intensively studied over the past decades. For complicated practical tasks, traditional "shallow" kernels (e.g., the Gaussian kernel and the sigmoid kernel) are not flexible enough to produce satisfactory performance. To address this shortcoming, this paper introduces a nonlinear layer into kernel learning to enhance model flexibility. This layer is pairwise: it fully exploits the coupling information among examples. Our model therefore contains a fixed single mapping layer (i.e., a Gaussian kernel) as well as a nonlinear pairwise layer, achieving better flexibility than existing kernel structures. Moreover, the proposed structure can be seamlessly embedded into Support Vector Machines (SVMs), whose training can be formulated as a joint optimization problem combining nonlinear function learning with standard SVM optimization. We theoretically prove that the objective function is gradient-Lipschitz continuous, which in turn guides how to accelerate optimization in a deep kernel architecture. Experimentally, the proposed structure outperforms other state-of-the-art kernel-based algorithms on various benchmark datasets, demonstrating the effectiveness of the incorporated pairwise layer and its training approach.
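A minimal sketch of the idea, assuming an elementwise tanh pairwise layer on top of a fixed Gaussian Gram matrix and a smooth squared-hinge surrogate in place of the full SVM dual (all of these choices are illustrative, not the paper's exact model); the layer parameters and the classifier coefficients are trained jointly by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data with a nonlinear (XOR-like) decision rule.
n, d = 40, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] * X[:, 1])

# Fixed "shallow" layer: a Gaussian Gram matrix.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)

# Hypothetical pairwise layer: elementwise tanh(a*K + b), parameters (a, b).
def layer(K, a, b):
    return np.tanh(a * K + b)

# Joint objective: squared hinge loss of f = layer(K) @ beta plus L2 on beta.
# The squared hinge keeps the objective smooth (cf. the paper's
# gradient-Lipschitz analysis), so plain gradient descent applies.
def loss_and_grads(beta, a, b, lam=1e-2):
    M = layer(K, a, b)
    f = M @ beta
    margin = 1.0 - y * f
    active = margin > 0
    L = (margin[active] ** 2).sum() / n + lam * beta @ beta
    df = np.where(active, -2.0 * y * margin, 0.0) / n  # dL/df
    dM = 1.0 - M ** 2                                  # tanh'
    g_beta = M.T @ df + 2 * lam * beta
    g_a = (np.outer(df, beta) * dM * K).sum()          # backprop through layer
    g_b = (np.outer(df, beta) * dM).sum()
    return L, g_beta, g_a, g_b

# Joint gradient descent on (beta, a, b).
beta, a, b = np.zeros(n), 1.0, 0.0
for _ in range(500):
    L, g_beta, g_a, g_b = loss_and_grads(beta, a, b)
    beta -= 0.1 * g_beta
    a -= 0.01 * g_a
    b -= 0.01 * g_b

acc = (np.sign(layer(K, a, b) @ beta) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The actual paper works with the SVM dual and proves the Lipschitz constant of the gradient, which would justify a principled step size here; the fixed learning rates above are arbitrary placeholders.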