
    Efficient Algorithms and Error Analysis for the Modified Nyström Method

    Many kernel methods suffer from high time and space complexities and are thus prohibitive in big-data applications. To tackle this computational challenge, the Nyström method has been used extensively to reduce time and space complexity at the cost of some accuracy. The Nyström method speeds up computation by constructing an approximation of the kernel matrix using only a few of its columns. Recently, a variant called the modified Nyström method has demonstrated significant improvement over the standard Nyström method in approximation accuracy, both theoretically and empirically. In this paper, we propose two algorithms that make the modified Nyström method practical. First, we devise a simple column selection algorithm with a provable error bound; it is more efficient and easier to implement than the state-of-the-art algorithm, and nearly as accurate. Second, with the selected columns at hand, we propose an algorithm that computes the approximation in lower time complexity than the approach in previous work. Furthermore, we prove that the modified Nyström method is exact under certain conditions, and we establish a lower error bound for it. Comment: 9-page paper plus appendix. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 3
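The abstract above contrasts the standard and modified Nyström approximations. The sketch below is a minimal illustration, not the paper's algorithms: it builds both approximations from the same sampled columns, where the standard variant uses the small intersection block W and the modified variant uses the Frobenius-optimal intersection matrix U = C⁺K(C⁺)ᵀ. The uniform column sampling, RBF kernel, and problem sizes are arbitrary choices made for the example.

```python
import numpy as np

def standard_nystrom(K, cols):
    """Standard Nystrom: K ~= C W^+ C^T, built from the sampled columns only."""
    C = K[:, cols]                      # n x c sampled columns
    W = K[np.ix_(cols, cols)]           # c x c intersection block
    return C @ np.linalg.pinv(W) @ C.T

def modified_nystrom(K, cols):
    """Modified Nystrom: K ~= C U C^T with U = C^+ K (C^+)^T,
    the Frobenius-optimal intersection matrix for the chosen C."""
    C = K[:, cols]
    C_pinv = np.linalg.pinv(C)          # c x n pseudoinverse
    U = C_pinv @ K @ C_pinv.T           # c x c
    return C @ U @ C.T

# Toy usage: RBF kernel on random points, uniform column sampling (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T))   # RBF kernel, gamma = 1
cols = rng.choice(200, size=20, replace=False)
err_std = np.linalg.norm(K - standard_nystrom(K, cols), "fro")
err_mod = np.linalg.norm(K - modified_nystrom(K, cols), "fro")
print(err_std, err_mod)
```

Because the modified intersection matrix minimizes the Frobenius error over all U for the chosen columns, its error is never larger than the standard Nyström error for the same column set, which is the accuracy gap the abstract refers to.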

    A perturbation based out-of-sample extension framework

    Out-of-sample extension is an important task in various kernel-based non-linear dimensionality reduction algorithms. In this paper, we derive a perturbation-based extension framework by extending results from classical perturbation theory. We prove that our extension framework generalizes the well-known Nyström method as well as some of its variants. We provide an error analysis for our extension framework and suggest new forms of extension under this framework that take advantage of the structure of the kernel matrix. We support our theoretical results numerically and demonstrate the advantages of our extension framework on both synthetic and real data. Comment: 22 pages, 9 figures
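For context, the classical Nyström out-of-sample extension that this framework generalizes can be written in a few lines: eigendecompose the training kernel matrix, then map a new point through its kernel evaluations against the training set. The code below is a minimal sketch under that reading; the function name, the kernel-PCA-style scaling, and the eigenvalue clipping are illustrative assumptions, not details from the paper.

```python
import numpy as np

def nystrom_extension(K_train, k_new, n_components):
    """Classical Nystrom out-of-sample extension for kernel eigenmaps.

    K_train : (n, n) kernel matrix on the training set
    k_new   : (m, n) kernel evaluations between new points and training points
    Returns embeddings of the training points and of the new points.
    """
    # Eigendecomposition of the training kernel, largest eigenvalues first.
    vals, vecs = np.linalg.eigh(K_train)
    idx = np.argsort(vals)[::-1][:n_components]
    lam = np.maximum(vals[idx], 1e-12)          # clip tiny negatives from round-off
    V = vecs[:, idx]

    train_embed = V * np.sqrt(lam)              # kernel-PCA coordinates on the training set
    new_embed = (k_new @ V) / np.sqrt(lam)      # Nystrom extension of the same map
    return train_embed, new_embed
```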

    Less is More: Nyström Computational Regularization

    We study Nystr\"om type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nystr\"om Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state of the art performances on benchmark large scale datasets.Comment: updated version of NIPS 2015 (oral
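The estimator this abstract refers to restricts kernel ridge regression to a subsample of m centers, so only an m-by-m system has to be solved. The sketch below is a minimal plain-Nyström version under that reading; the incremental variant described in the paper, which adds centers one at a time and updates the solution, is not reproduced here, and the jitter term is an illustrative numerical safeguard.

```python
import numpy as np

def nystrom_krls(K_nm, K_mm, y, lam):
    """Plain Nystrom kernel regularized least squares on m sampled centers.

    K_nm : (n, m) kernel between all n training points and the m centers
    K_mm : (m, m) kernel among the centers
    Solves  min_a ||K_nm a - y||^2 + n * lam * a^T K_mm a
    """
    n = K_nm.shape[0]
    A = K_nm.T @ K_nm + n * lam * K_mm
    b = K_nm.T @ y
    # Small jitter keeps a possibly rank-deficient system well posed.
    alpha = np.linalg.solve(A + 1e-10 * np.eye(A.shape[0]), b)
    return alpha
```

Predictions at a new point x are k(x, centers) @ alpha; increasing m tightens the approximation but raises the cost, which is the computational-regularization trade-off the abstract describes.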