We introduce a proximal version of the stochastic dual coordinate ascent
method and show how to accelerate the method using an inner-outer iteration
procedure. We analyze the runtime of the framework and obtain rates that
improve state-of-the-art results for various key machine learning optimization
problems including SVM, logistic regression, ridge regression, Lasso, and
multiclass SVM. Experiments validate our theoretical findings.
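
To fix ideas, here is a minimal statement of the regularized loss minimization problem such a framework targets (the notation $\phi_i$, $g$, $\lambda$ is our choice, assuming the standard setup):
\[
\min_{w \in \mathbb{R}^d} \; P(w) \;=\; \frac{1}{n} \sum_{i=1}^{n} \phi_i\!\left(x_i^\top w\right) \;+\; \lambda\, g(w),
\]
where each $\phi_i$ is a convex loss on example $x_i$ and $g$ is a convex regularizer handled through its proximal operator. For instance, the hinge loss with $g(w) = \tfrac{1}{2}\|w\|_2^2$ recovers the SVM, while the squared loss with $g(w) = \|w\|_1$ recovers the Lasso.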