M-Power Regularized Least Squares Regression
Regularization is used to find a solution that both fits the data and is sufficiently smooth, and it is thereby very effective for designing and refining learning algorithms. But the influence of its exponent remains poorly understood. In particular, it is unclear how the exponent of the reproducing kernel Hilbert space (RKHS) regularization term affects the accuracy and the efficiency of kernel-based learning algorithms. Here we consider regularized least squares regression (RLSR) with an RKHS regularization raised to the power of m, where m is a variable real exponent. We design an efficient algorithm for solving the associated minimization problem, provide a theoretical analysis of its stability, and compare it, in terms of computational complexity, speed of convergence, and prediction accuracy, to the classical kernel ridge regression algorithm, in which the regularization exponent m is fixed at 2. Our results show that the m-power RLSR problem can be solved efficiently, and they support the suggestion that one can use a regularization term that grows significantly more slowly than the standard quadratic growth in the RKHS norm.
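To make the setup concrete, here is a minimal numerical sketch, not the authors' algorithm: by the representer theorem the minimizer has the form f(x) = sum_i alpha_i k(x_i, x) with ||f||_H^2 = alpha^T K alpha, so the m-power objective J(alpha) = (1/n)||y - K alpha||^2 + lambda (alpha^T K alpha)^(m/2) can be minimized directly over alpha with a generic solver. The kernel choice and the values of lambda and m below are illustrative assumptions.

```python
# Minimal sketch of m-power RLSR (illustrative, not the paper's algorithm):
# minimize J(alpha) = (1/n)||y - K alpha||^2 + lam * (alpha^T K alpha)^(m/2).
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and Z."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * d2)

def m_power_rlsr(K, y, lam=1e-2, m=1.5):
    """Fit expansion coefficients alpha for the m-power RLSR objective."""
    n = len(y)

    def objective(alpha):
        r = y - K @ alpha                 # residuals on the training set
        reg = alpha @ K @ alpha           # squared RKHS norm of f
        return r @ r / n + lam * max(reg, 0.0) ** (m / 2)

    res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
    return res.x

# Usage on a small synthetic regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)
K = rbf_kernel(X, X, gamma=5.0)
alpha = m_power_rlsr(K, y, lam=1e-3, m=1.5)
print("training MSE:", np.mean((K @ alpha - y) ** 2))
```

For m = 2 this reduces to kernel ridge regression, which has a closed-form solution; for other exponents a generic solver, as above, is one straightforward option.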
Equivalence of Learning Algorithms
The purpose of this paper is to introduce a concept of equivalence between machine learning algorithms. We define two notions of algorithmic equivalence, namely weak and strong equivalence. These notions are of paramount importance for identifying when learning properties of one learning algorithm can be transferred to another. Using regularized kernel machines as a case study, we illustrate the importance of the introduced equivalence concept by analyzing the relation between the kernel ridge regression (KRR) and m-power regularized least squares regression (M-RLSR) algorithms.
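To make the KRR/M-RLSR relation tangible, here is a hedged numerical check under the sketch objective above, not the paper's analysis: setting the gradient of J(alpha) to zero yields (K + lam_eff I) alpha = y with lam_eff = (n m lambda / 2)(alpha^T K alpha)^((m-2)/2), so an m-power solution also satisfies a kernel-ridge normal equation with a matched effective regularizer. Continuing the sketch above:

```python
# Hedged equivalence check, reusing m_power_rlsr, K, and y from the sketch
# above (lam=1e-3, m=1.5): the m-power solution should approximately coincide
# with the KRR solution obtained from the matched effective ridge lam_eff.
alpha_m = m_power_rlsr(K, y, lam=1e-3, m=1.5)
r = alpha_m @ K @ alpha_m                              # squared RKHS norm
lam_eff = len(y) * 1.5 * 1e-3 / 2 * r ** ((1.5 - 2) / 2)
alpha_krr = np.linalg.solve(K + lam_eff * np.eye(len(y)), y)  # classical KRR
print("max |alpha_m - alpha_krr|:", np.max(np.abs(alpha_m - alpha_krr)))
```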
Distributed Kernel Regression: An Algorithm for Training Collaboratively
This paper addresses the problem of distributed learning under communication
constraints, motivated by distributed signal processing in wireless sensor
networks and data mining with distributed databases. After formalizing a
general model for distributed learning, an algorithm for collaboratively
training regularized kernel least-squares regression estimators is derived.
Noting that the algorithm can be viewed as an application of successive
orthogonal projection algorithms, its convergence properties are investigated
and the statistical behavior of the estimator is discussed in a simplified
theoretical setting.
Comment: To be presented at the 2006 IEEE Information Theory Workshop, Punta del Este, Uruguay, March 13-17, 2006.
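As a rough illustration of the collaborative setting, and emphatically not the projection-based algorithm derived in the paper, the toy sketch below lets three simulated sensor nodes hold disjoint slices of the training data and take turns refining a shared kernel least-squares model; the kernel, step size, and node count are all assumptions.

```python
# Toy simulation of collaborative training (an assumption-laden sketch, not
# the paper's successive-projection algorithm): each node takes a gradient
# step on its local regularized least-squares loss, updating a shared model.
import numpy as np

def rbf(X, Z, gamma=5.0):
    """Gaussian kernel matrix between rows of X and Z."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)
K = rbf(X, X)
lam = 1e-2
alpha = np.zeros(60)
nodes = np.array_split(np.arange(60), 3)   # each node's local sample indices
step = 0.5 / np.linalg.norm(K, 2)          # conservative step size

for sweep in range(300):                   # cycle repeatedly through the nodes
    for idx in nodes:                      # one node updates the shared model
        resid = K[idx] @ alpha - y[idx]    # residuals on this node's slice
        grad = K[idx].T @ resid / len(idx) + lam * (K @ alpha)
        alpha -= step * grad
print("training MSE:", np.mean((K @ alpha - y) ** 2))
```

In this toy version every node sees the full coefficient vector; the communication constraints that motivate the paper would restrict what each node may exchange.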