
    Sublinear dual coordinate ascent for regularized loss minimization

    © 2016 IEEE. We present a sublinear version of the dual coordinate ascent method for solving a family of regularized loss minimization problems in machine learning. The proposed method seamlessly integrates sampling techniques, the dual coordinate ascent method, and a multiplicative update algorithm. The sampling techniques choose the "expected" examples and estimate the corresponding inner products. The dual coordinate ascent method generates an updated iterative step, which outperforms the online-learning step used in the previous sublinear perceptron algorithm. The multiplicative update algorithm updates the example weighting. The proposed method runs with an iteration cost of order O(log(n)), where n is the number of examples, and achieves better results than other methods with high probability. We present a theoretical analysis of the sublinear iteration to justify its benefits. We then apply the proposed optimization method to support vector machines and conduct experiments on three large-scale datasets. Our experimental results validate our theoretical findings.
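    The abstract names three interacting components: importance sampling over examples, a dual coordinate ascent (DCA) step, and multiplicative weights on the examples. The sketch below is a minimal, generic reconstruction of such a loop for an L2-regularized hinge-loss SVM; the l2 coordinate sampling used for the inner-product estimate, the SDCA-style closed-form step, the exponential weight update, and all step sizes are standard choices assumed here for illustration, not the paper's exact rules.

```python
import numpy as np

def sublinear_dca_sketch(X, y, lam=0.1, T=1000, eta=0.1, k=20, seed=0):
    """Hedged sketch of a sublinear dual coordinate ascent loop for an
    L2-regularized hinge-loss SVM. It combines the three ingredients named
    in the abstract; the concrete update rules are assumptions, not the
    paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)      # dual variables, one per example
    w = np.zeros(d)          # primal iterate: w = (1/(lam*n)) * sum_i alpha_i * y_i * x_i
    weights = np.ones(n)     # multiplicative weights over the examples

    for _ in range(T):
        # (1) sample an "expected" (hard) example from the weight distribution
        p = weights / weights.sum()
        i = rng.choice(n, p=p)

        # (2) estimate <w, x_i> from k coordinates sampled with prob. w_j^2 / ||w||^2
        #     (an unbiased l2-sampling estimator; coordinates with w_j = 0 are never drawn)
        w_sq = w @ w
        if w_sq > 0:
            idx = rng.choice(d, size=k, p=(w * w) / w_sq)
            inner = w_sq * np.mean(X[i, idx] / w[idx])
        else:
            inner = 0.0

        # (3) SDCA-style closed-form coordinate step for hinge loss, alpha_i in [0, 1]
        xi_sq = max(X[i] @ X[i], 1e-12)
        delta = np.clip(alpha[i] + (1.0 - y[i] * inner) * lam * n / xi_sq, 0.0, 1.0) - alpha[i]
        alpha[i] += delta
        w += delta * y[i] * X[i] / (lam * n)

        # multiplicative update: upweight examples that still violate the margin
        weights[i] *= np.exp(eta * max(0.0, 1.0 - y[i] * inner))

    return w, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 50))
    y = np.sign(X @ rng.normal(size=50))
    w, _ = sublinear_dca_sketch(X, y)
    print("train accuracy:", np.mean(np.sign(X @ w) == y))
```

    The per-iteration bottleneck in this sketch is the two sampling steps; with tree-structured samplers the example draw costs O(log n) and the coordinate draw O(k log d), which is the sense in which such a loop avoids touching all n examples at every step. The full-vector operations on X[i] above are kept for clarity and would themselves have to be sampled or amortized in a genuinely sublinear implementation.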