    Proximal Stochastic Newton-type Gradient Descent Methods for Minimizing Regularized Finite Sums

    In this work, we generalize and unify two recent and quite different lines of work, by Sohl-Dickstein \cite{sohl2014fast} and Lee \cite{lee2012proximal} respectively, by proposing the \textbf{prox}imal s\textbf{to}chastic \textbf{N}ewton-type gradient (PROXTONE) method for minimizing the sum of two convex functions: one is the average of a huge number of smooth convex functions, and the other is a non-smooth convex function. While a set of recently proposed proximal stochastic gradient methods, including MISO, Prox-SDCA, Prox-SVRG, and SAG, converge at linear rates, PROXTONE incorporates second-order information to obtain stronger convergence results: it achieves a linear convergence rate not only in the value of the objective function but also in the \emph{solution} itself. The proof is simple and intuitive, and the results and technique can serve as a starting point for research on proximal stochastic methods that employ second-order information.
    Comment: arXiv admin note: text overlap with arXiv:1309.2388 and arXiv:1403.4699 by other authors.
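    To make the problem class concrete, the composite objective described in the abstract can be written in standard notation (a sketch inferred from the abstract, not quoted from the paper; the symbols $F$, $f_i$, $h$, $x_k$, and $H_k$ are notation introduced here) as
    \[
    \min_{x \in \mathbb{R}^d} \; F(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} f_i(x) \;+\; h(x),
    \]
    where each $f_i$ is smooth and convex and $h$ is convex but possibly non-smooth. A generic proximal Newton-type update of the kind such methods build on, with $H_k$ a positive-definite (possibly stochastic) approximation of the Hessian of the smooth part, takes the form
    \[
    x_{k+1} \;=\; \operatorname*{arg\,min}_{x} \Big\{ \nabla f(x_k)^{\top}(x - x_k) \;+\; \tfrac{1}{2}\,(x - x_k)^{\top} H_k (x - x_k) \;+\; h(x) \Big\},
    \]
    which reduces to a proximal gradient step when $H_k$ is a multiple of the identity; this is the general scheme, not necessarily the exact PROXTONE update.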