Generalization Properties of Doubly Stochastic Learning Algorithms
Doubly stochastic learning algorithms are scalable kernel methods that
perform very well in practice. However, their generalization properties are not
well understood and their analysis is challenging since the corresponding
learning sequence may not be in the hypothesis space induced by the kernel. In
this paper, we provide an in-depth theoretical analysis for different variants
of doubly stochastic learning algorithms within the setting of nonparametric
regression in a reproducing kernel Hilbert space and considering the square
loss. In particular, we derive convergence results on the generalization error of the studied algorithms, both with and without an explicit penalty term. To
the best of our knowledge, the derived results for the unregularized variants
are the first of this kind, while the results for the regularized variants
improve those in the literature. The novelties in our proof are a sample error
bound that requires controlling the trace norm of a cumulative operator, and a
refined analysis of the initial error bound.

Comment: 24 pages. To appear in Journal of Complexity
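To make the update concrete, here is a minimal sketch of one common form of a doubly stochastic gradient step for square-loss regression: each iteration draws one data point and one random Fourier feature of an RBF kernel, the two sources of stochasticity. The function names, step-size schedule, and kernel choice are illustrative assumptions, not necessarily the paper's exact variants.

    import numpy as np

    def doubly_stochastic_sgd(X, y, sigma=1.0, lam=0.01, eta0=0.5, T=500, seed=0):
        """Doubly stochastic functional gradient sketch: each step samples a
        data point AND a random Fourier feature of the RBF kernel, then takes
        a functional gradient step; the iterate keeps one coefficient per step."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.normal(scale=1.0 / sigma, size=(T, d))  # random frequencies
        b = rng.uniform(0.0, 2.0 * np.pi, size=T)       # random phases

        def feat(t, x):
            # one random cosine feature approximating the RBF kernel
            return np.sqrt(2.0) * np.cos(W[t] @ x + b[t])

        alpha = np.zeros(T)                             # per-step coefficients
        for t in range(T):
            i = rng.integers(n)                         # randomness 1: data point
            eta = eta0 / np.sqrt(t + 1)
            f_xi = sum(alpha[s] * feat(s, X[i]) for s in range(t))  # f_t(x_i)
            alpha[:t] *= 1.0 - eta * lam                # explicit penalty shrinkage
            alpha[t] = -eta * (f_xi - y[i]) * feat(t, X[i])  # randomness 2: feature t
        return lambda x: sum(alpha[s] * feat(s, x) for s in range(T))

Setting lam = 0 corresponds to the unregularized variants studied in the paper, and lam > 0 to the variants with an explicit penalty term.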
Scalable Large Margin Pairwise Learning Algorithms
2019 Summer. Includes bibliographical references.

Classification is a major task in machine learning and data mining applications. Many of these applications involve building a classification model from a large volume of imbalanced data. In such an imbalanced learning scenario, the area under the ROC curve (AUC) has proven to be a reliable performance measure for evaluating a classifier. It is therefore desirable to develop scalable learning algorithms that maximize the AUC metric directly. Kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines, but their computational cost hinders their scalability. To address this problem, we propose a large-scale nonlinear AUC maximization algorithm that learns a batch linear classifier on an approximate feature space computed via the k-means Nyström method. The proposed algorithm is shown empirically to achieve AUC classification performance comparable to, or even better than, the kernel AUC machines, while its training time is faster by several orders of magnitude.

However, the computational complexity of the linear batch model compromises its scalability when training on sizable datasets. Hence, we develop second-order online AUC maximization algorithms based on a confidence-weighted model. The proposed algorithms exploit second-order information to improve the convergence rate and implement a fixed-size buffer to address the multivariate nature of the AUC objective function. We also extend our online linear algorithms to an approximate feature map constructed using random Fourier features in an online setting. The results show that our proposed algorithms outperform, or are at least comparable to, the competing online AUC maximization methods.

Despite their scalability, we notice that online first- and second-order AUC maximization methods are prone to suboptimal convergence, which can be attributed to the limitation of the hypothesis space. A potential improvement can be attained by learning stochastic online variants. However, vanilla stochastic methods also suffer from slow convergence because of the high variance introduced by the stochastic process. We address this problem by developing a fast-convergence stochastic AUC maximization algorithm, accelerated through a unique combination of scheduled regularization updates and scheduled averaging. The experimental results show that the proposed algorithm outperforms the state-of-the-art online and stochastic AUC maximization methods in terms of AUC classification accuracy.

Moreover, we develop a proximal variant of our accelerated stochastic AUC maximization algorithm. The proposed method applies the proximal operator to the hinge loss function and therefore evaluates the gradient of the loss function at an approximated weight vector. Experiments on several benchmark datasets show that our proximal algorithm converges to the optimal solution faster than the previous AUC maximization algorithms.
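As a minimal sketch of the first stage described above, assuming an RBF kernel and plain batch subgradient descent (the helper names, kernel, and hyperparameters are hypothetical, not the thesis's implementation), one can build the k-means Nyström feature map and then minimize the pairwise hinge surrogate of AUC on it:

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_nystrom_map(X, n_landmarks=64, gamma=0.1, seed=0):
        """k-means Nystrom approximation: landmarks are k-means centers,
        and phi(x) = k(x, L) @ K_LL^{-1/2} for the RBF kernel k."""
        km = KMeans(n_clusters=n_landmarks, n_init=10, random_state=seed).fit(X)
        L = km.cluster_centers_
        rbf = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None, :]) ** 2).sum(-1))
        w, V = np.linalg.eigh(rbf(L, L) + 1e-8 * np.eye(n_landmarks))
        K_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
        return lambda Q: rbf(Q, L) @ K_inv_sqrt

    def batch_auc_hinge(Phi, y, epochs=100, lr=0.5):
        """Batch subgradient descent on the pairwise hinge surrogate of AUC:
        the mean over (positive, negative) pairs of max(0, 1 - w.(p - n))."""
        P, N = Phi[y == 1], Phi[y != 1]
        w = np.zeros(Phi.shape[1])
        for _ in range(epochs):
            diff = P[:, None, :] - N[None, :, :]   # all pos-neg feature differences
            viol = (diff @ w) < 1.0                # pairs violating the margin
            w += lr * (diff * viol[:, :, None]).sum((0, 1)) / viol.size
        return w

The all-pairs tensor in this training loop is exactly the quadratic cost that motivates the thesis's subsequent online, buffered, and stochastic variants.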
Pairwise Learning via Stagewise Training in Proximal Setting
Pairwise objective paradigms are an essential aspect of machine learning. Examples of machine learning approaches that use pairwise objective functions include differential networks in face recognition, metric learning, bipartite learning, multiple kernel learning, and maximization of the area under the curve (AUC). Compared to pointwise learning, the number of training pairs in pairwise learning grows quadratically with the number of samples, and so does its complexity. Researchers mostly address this challenge with online learning methods. Recent research has, however, proposed adaptive sample size
training for smooth loss functions as a better strategy in terms of convergence
and complexity, but without a comprehensive theoretical study. In a distinct line of research, importance sampling has attracted considerable interest in finite pointwise-sum minimization, because the high variance of the stochastic gradient can slow convergence considerably. In this paper, we combine adaptive sample size and importance sampling techniques for pairwise learning, with convergence guarantees for nonsmooth convex pairwise loss functions. In particular, the model is trained stochastically on an expanding training set for a predefined number of iterations derived from stability bounds. In addition, we demonstrate that sampling instances from opposite classes at each iteration reduces the variance of the
gradient, hence accelerating convergence. Experiments on a broad variety of
datasets in AUC maximization confirm the theoretical results.

Comment: 10 pages
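A hedged sketch of the stagewise scheme follows, assuming a pairwise hinge loss and a fixed inner-iteration count where the paper derives it from stability bounds; all names and schedules below are illustrative.

    import numpy as np

    def stagewise_pairwise_hinge(X, y, stages=5, lr=0.1, seed=0):
        """Stagewise (adaptive sample size) pairwise training sketch: the
        training subset roughly doubles each stage, the previous solution
        warm-starts the next stage, and each stochastic step draws one
        positive and one negative instance so the hinge subgradient is
        computed on a genuine pos-neg pair. Assumes both classes appear
        in every stage's subset."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        order = rng.permutation(n)
        w = np.zeros(d)                              # warm-started across stages
        m = max(2, n // 2 ** (stages - 1))           # initial subset size
        for _ in range(stages):
            sub = order[:min(m, n)]
            pos, neg = sub[y[sub] == 1], sub[y[sub] != 1]
            for t in range(4 * len(sub)):            # stand-in for the paper's
                i, j = rng.choice(pos), rng.choice(neg)  # stability-derived count
                diff = X[i] - X[j]
                if w @ diff < 1.0:                   # hinge is active
                    w += lr / np.sqrt(t + 1) * diff  # subgradient step
            m *= 2                                   # expand the training set
        return w

Pairing one instance from each class per step is one concrete reading of the opposite-class sampling that the abstract credits with reducing gradient variance.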