Distributed Mini-Batch SDCA
We present an improved analysis of mini-batched stochastic dual coordinate
ascent for regularized empirical loss minimization (i.e. SVM and SVM-type
objectives). Our analysis allows for flexible sampling schemes, including when the
data is distributed across machines, and combines a dependence on the smoothness
of the loss and/or on the data spread (measured through the spectral norm).
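As a concrete illustration of the mini-batch scheme, the following is a minimal sketch for the hinge-loss SVM with L2 regularization. The function name, batch size, and the aggressiveness parameter beta are illustrative assumptions, not the paper's implementation; the paper's analysis concerns how large beta may safely be as a function of the spectral norm of the data.

```python
import numpy as np

def minibatch_sdca_svm(X, y, lam=0.01, batch_size=16, epochs=20, beta=1.0, seed=0):
    """Sketch of mini-batch SDCA for the L2-regularized hinge-loss SVM.

    Labels y are assumed to be +/-1. The primal iterate is maintained as
    w = (1/(lam*n)) * sum_i alpha_i * y_i * x_i, with dual alpha_i in [0,1].
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)                      # dual variables, one per example
    w = np.zeros(d)                          # primal iterate induced by alpha
    sq_norms = np.einsum('ij,ij->i', X, X)   # ||x_i||^2, precomputed
    for _ in range(epochs):
        for _ in range(max(1, n // batch_size)):
            batch = rng.choice(n, size=batch_size, replace=False)
            delta = np.zeros(batch_size)
            # compute every coordinate's serial update from the SAME iterate w
            for k, i in enumerate(batch):
                step_i = (1.0 - y[i] * (w @ X[i])) * lam * n / (sq_norms[i] + 1e-12)
                delta[k] = np.clip(alpha[i] + step_i, 0.0, 1.0) - alpha[i]
            # apply the aggregated updates, scaled by beta / batch_size
            scale = beta / batch_size
            alpha[batch] += scale * delta
            w += scale * (X[batch].T @ (delta * y[batch])) / (lam * n)
    return w, alpha
```

With beta = 1, each applied update is a convex combination of single-coordinate ascent steps, so the dual objective cannot decrease; larger beta (faster progress) is admissible when the data is spread out, which is what a spectral-norm-dependent analysis quantifies.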
Primal-Dual Rates and Certificates
We propose an algorithm-independent framework to equip existing optimization
methods with primal-dual certificates. Such certificates, and the corresponding
convergence-rate guarantees, are important for practitioners to diagnose
progress, particularly in machine learning applications. We obtain new primal-dual
convergence rates, e.g., for the Lasso as well as many L1, Elastic Net, group
Lasso and TV-regularized problems. The theory applies to any norm-regularized
generalized linear model. Our approach provides efficiently computable duality
gaps which are globally defined, without modifying the original problems in the
region of interest.Comment: appearing at ICML 2016 - Proceedings of the 33rd International
Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 4
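To make the notion of an efficiently computable, globally defined duality gap concrete, here is a minimal sketch for the Lasso, min_w (1/2)||Xw - y||^2 + lam*||w||_1. The residual-rescaling construction of a dual-feasible point shown here is one standard technique; it illustrates the certificate idea rather than the paper's exact framework.

```python
import numpy as np

def lasso_duality_gap(X, y, w, lam):
    """Duality gap for the Lasso at an arbitrary primal iterate w.

    Any residual y - Xw is mapped to a dual-feasible point theta by
    rescaling so that ||X^T theta||_inf <= lam. Weak duality then gives
    P(w) - D(theta) >= P(w) - P(w*), i.e. a computable suboptimality
    certificate that is defined at every w, not just near the optimum.
    """
    residual = y - X @ w
    primal = 0.5 * residual @ residual + lam * np.abs(w).sum()
    # rescale the residual to enforce dual feasibility
    scale = max(1.0, np.abs(X.T @ residual).max() / lam)
    theta = residual / scale
    # Lasso dual objective: D(theta) = ||y||^2/2 - ||y - theta||^2/2
    dual = 0.5 * (y @ y) - 0.5 * ((y - theta) @ (y - theta))
    return primal - dual
```

A gap below a tolerance eps certifies that the current iterate is eps-suboptimal, which is why such certificates make principled stopping criteria for any solver, independent of the optimization method used.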