2,352 research outputs found

    Efficient Semidefinite Spectral Clustering via Lagrange Duality

    We propose an efficient approach to semidefinite spectral clustering (SSC), which addresses the Frobenius normalization with the positive semidefinite (p.s.d.) constraint for spectral clustering. Compared with the original Frobenius-norm-approximation-based algorithm, the proposed algorithm finds the closest doubly stochastic approximation to the affinity matrix more accurately by taking the p.s.d. constraint into account. In this paper, SSC is formulated as a semidefinite programming (SDP) problem. To address the high computational complexity of SDP, we present a dual algorithm based on the Lagrange dual formulation. Two versions of the proposed algorithm are presented: one with lower memory usage and the other with a faster convergence rate. The proposed algorithm has much lower time complexity than standard interior-point SDP solvers. Experimental results on both UCI data sets and real-world image data sets demonstrate that 1) compared with state-of-the-art spectral clustering methods, the proposed algorithm achieves better clustering performance; and 2) our algorithm is much more efficient and can solve larger-scale SSC problems than standard interior-point SDP solvers. Comment: 13 pages
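    As a rough illustration of the SSC formulation described in this abstract (not the paper's Lagrange-dual algorithm), the sketch below uses a generic SDP solver via cvxpy to find the closest doubly stochastic p.s.d. approximation to a toy affinity matrix in Frobenius norm, then runs spectral clustering on the result. The data, kernel bandwidth, and cluster count are hypothetical choices for demonstration only.

        import numpy as np
        import cvxpy as cp
        from sklearn.cluster import KMeans

        # Toy affinity matrix from two well-separated point clouds (hypothetical data).
        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(3.0, 0.3, (10, 2))])
        d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
        A = np.exp(-d2)                      # symmetric, nonnegative affinities

        # SDP: closest doubly stochastic, p.s.d. matrix to A in Frobenius norm.
        n = A.shape[0]
        X = cp.Variable((n, n), PSD=True)    # p.s.d. (and hence symmetric) variable
        constraints = [X >= 0, cp.sum(X, axis=1) == 1]
        prob = cp.Problem(cp.Minimize(cp.norm(X - A, "fro")), constraints)
        prob.solve()                         # generic solver, not the paper's dual method

        # Spectral clustering on the approximation: top eigenvectors + k-means.
        k = 2
        vals, vecs = np.linalg.eigh(X.value)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(vecs[:, -k:])
        print(labels)

    This generic-solver route is exactly what the abstract argues is too expensive at scale; the paper's contribution is the Lagrange-dual algorithm that avoids it.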

    Learning with SGD and Random Features

    Sketching and stochastic gradient methods are arguably the most common techniques used to derive efficient large-scale learning algorithms. In this paper, we investigate their application in the context of nonparametric statistical learning. More precisely, we study the estimator defined by stochastic gradient descent with mini-batches and random features. The latter can be seen as a form of nonlinear sketching and can be used to define approximate kernel methods. The considered estimator is not explicitly penalized or constrained, and regularization is implicit. Indeed, our study highlights how different parameters, such as the number of features, the number of iterations, the step-size, and the mini-batch size, control the learning properties of the solutions. We do this by deriving optimal finite-sample bounds under standard assumptions. The obtained results are corroborated and illustrated by numerical experiments.
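    A minimal sketch of the kind of estimator studied here, assuming a Gaussian kernel approximated by random Fourier features and plain mini-batch SGD on an unregularized squared loss, with regularization left implicit in the step-size, batch size, and number of passes. The synthetic data, feature count, and hyperparameters below are illustrative assumptions, not the paper's experimental setup.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic regression data (hypothetical stand-in for a real dataset).
        n, d = 2000, 5
        X = rng.normal(size=(n, d))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

        # Random Fourier features approximating a Gaussian kernel (nonlinear sketch).
        M = 300                                    # number of random features
        W = rng.normal(size=(d, M))                # frequencies for unit bandwidth
        b = rng.uniform(0.0, 2.0 * np.pi, size=M)

        def features(Z):
            return np.sqrt(2.0 / M) * np.cos(Z @ W + b)

        Phi = features(X)

        # Mini-batch SGD on the unpenalized least-squares objective.
        w = np.zeros(M)
        step, batch, epochs = 0.5, 32, 10
        for _ in range(epochs):
            for idx in np.array_split(rng.permutation(n), n // batch):
                grad = Phi[idx].T @ (Phi[idx] @ w - y[idx]) / len(idx)
                w -= step * grad

        print("train MSE:", np.mean((Phi @ w - y) ** 2))

    The point of the paper is that the choices of M, step, batch, and epochs jointly act as the regularizer; the sketch just shows where each parameter enters.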

    Doubly Stochastic Variational Inference for Deep Gaussian Processes

    Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to over-fitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalisations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm, which does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression.Comment: NIPS 201
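    The sketch below only illustrates the "doubly stochastic" structure of the ELBO estimator described in this abstract: Monte Carlo samples propagated through the layers (so layers are not forced to be independent) combined with a rescaled random minibatch. It replaces the paper's sparse-GP layers and variational posteriors with toy Gaussian layers and omits the KL terms, so it is a hypothetical simplification, not the actual algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data (hypothetical); a real DGP would use GP layers with inducing points.
        N, D = 5000, 1
        X = rng.uniform(-3, 3, size=(N, D))
        Y = np.sin(X) + 0.1 * rng.normal(size=(N, 1))

        def layer_sample(F_in, mean_w, log_std):
            # Reparameterized sample standing in for a GP layer's variational posterior.
            mean = F_in @ mean_w
            return mean + np.exp(log_std) * rng.normal(size=mean.shape)

        def minibatch_elbo_estimate(params, batch_idx, noise_var=0.01):
            # Doubly stochastic estimate: (1) samples propagated layer by layer,
            # (2) a random minibatch rescaled to the full data size. KL terms omitted.
            Xb, Yb = X[batch_idx], Y[batch_idx]
            F1 = layer_sample(Xb, params["w1"], params["s1"])   # hidden layer sample
            F2 = layer_sample(F1, params["w2"], params["s2"])   # output layer sample
            log_lik = -0.5 * np.sum((Yb - F2) ** 2) / noise_var
            return (N / len(batch_idx)) * log_lik

        params = {"w1": rng.normal(size=(D, 3)), "s1": np.full((1, 3), -2.0),
                  "w2": rng.normal(size=(3, 1)), "s2": np.full((1, 1), -2.0)}
        batch = rng.choice(N, size=256, replace=False)
        print("stochastic ELBO estimate:", minibatch_elbo_estimate(params, batch))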