Statistical and Computational Trade-Offs in Kernel K-Means

Abstract

We investigate the efficiency of k-means in terms of both statistical and computational requirements. More precisely, we study a Nyström approach to kernel k-means. We analyze the statistical properties of the proposed method and show that it achieves the same accuracy as exact kernel k-means with only a fraction of the computations. Indeed, we prove under basic assumptions that sampling √n Nyström landmarks allows one to greatly reduce computational costs without incurring any loss of accuracy. To the best of our knowledge, this is the first result of this kind for unsupervised learning.
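To make the idea concrete, below is a minimal sketch (not the authors' code) of Nyström kernel k-means under assumed choices: uniform landmark sampling, an RBF kernel, and standard k-means on the approximate feature map. The function name and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kernel_kmeans(X, n_clusters, n_landmarks, gamma=1.0, seed=0):
    """Approximate kernel k-means using m Nystrom landmarks (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Sample m landmarks uniformly at random (the analysis suggests m ~ sqrt(n))
    idx = rng.choice(n, size=n_landmarks, replace=False)
    landmarks = X[idx]
    # Kernel blocks: n x m cross-kernel and m x m landmark kernel
    K_nm = rbf_kernel(X, landmarks, gamma=gamma)
    K_mm = rbf_kernel(landmarks, landmarks, gamma=gamma)
    # Nystrom feature map: K_nm @ K_mm^{-1/2}, via eigendecomposition of the small matrix
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)            # guard against tiny negative eigenvalues
    features = (K_nm @ V) / np.sqrt(w)  # n x m approximate features
    # Ordinary k-means in the approximate feature space
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)

# Usage example with m = sqrt(n) landmarks
X = np.random.default_rng(0).normal(size=(1000, 5))
labels = nystrom_kernel_kmeans(X, n_clusters=3, n_landmarks=int(np.sqrt(len(X))))
```

The key cost saving is that all dense linear algebra involves only the n x m and m x m kernel blocks rather than the full n x n kernel matrix.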
