Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions

Abstract

Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function involving expected values, i.e., a composition of two expected-value functions: problems of the form min_x E_v[ f_v( E_w[ g_w(x) ] ) ].
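The difficulty is that an unbiased sample of the inner expectation E_w[g_w(x)] is not available inside the nonlinear outer function. A common remedy, which the paper's title refers to as stochastic compositional gradient descent (SCGD), is a two-timescale scheme: a fast-moving running average tracks the inner expectation while a slower quasi-gradient step updates x. The sketch below illustrates this idea on a hypothetical toy instance (the maps `sample_g`, `grad_f` and the step-size exponents are illustrative assumptions, not taken from the paper's text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (hypothetical, for illustration only):
#   inner map  g_w(x) = w * x  with E[w] = 1, so E_w[g_w(x)] = x
#   outer map  f(y)   = (y - 1)^2
# The composition (x - 1)^2 is minimized at x = 1.
def sample_g(x):
    w = 1.0 + 0.1 * rng.standard_normal()
    return w * x, w            # sampled value and gradient dg/dx = w

def grad_f(y):
    return 2.0 * (y - 1.0)     # gradient of f(y) = (y - 1)^2

x = 0.0
y = 0.0                        # running estimate of the inner expectation E_w[g_w(x)]
for k in range(1, 5001):
    alpha = 1.0 / k ** 0.75    # slow step size for the decision variable x
    beta = 1.0 / k ** 0.5      # faster step size for the inner estimate y
    g_val, g_grad = sample_g(x)
    y = (1 - beta) * y + beta * g_val       # auxiliary averaging step
    x = x - alpha * g_grad * grad_f(y)      # compositional quasi-gradient step

print(abs(x - 1.0) < 0.2)
```

Because y averages past inner samples, it injects a bias that vanishes only as the two step sizes decay at different rates; this is why the scheme uses two timescales rather than a single step size.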

Last time updated on 30/10/2017

This paper was published in CiteSeerX.
