
    Stochastic Descent Analysis of Representation Learning Algorithms

    Although stochastic approximation learning methods have been widely used in the machine learning literature for over 50 years, formal theoretical analyses of specific machine learning algorithms are less common, because stochastic approximation theorems typically rest on assumptions that are difficult to communicate and verify. This paper presents a new stochastic approximation theorem for state-dependent noise with easily verifiable assumptions, applicable to the analysis and design of important deep learning algorithms including adaptive learning, contrastive divergence learning, stochastic descent expectation maximization, and active learning.

    Comment: Version: April 27, 2015. This paper has been withdrawn by the author because of a minor problem with the proof, which has since been corrected. The revised manuscript will eventually be published.
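    The stochastic approximation framework the abstract refers to is built on the classical Robbins-Monro iterate: repeatedly take steps along a noisy estimate of a descent direction, with step sizes that decay slowly enough to keep moving but fast enough to average out the noise. The following is a minimal illustrative sketch of that iterate on a toy quadratic objective with additive noise; the objective, noise model, and step-size schedule are assumptions chosen for illustration, not the state-dependent noise construction analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_grad(theta):
        # Noisy gradient of f(theta) = 0.5 * ||theta||^2: the true gradient
        # is theta, perturbed here by zero-mean Gaussian noise (illustrative).
        return theta + 0.1 * rng.standard_normal(theta.shape)

    theta = np.array([5.0, -3.0])
    for t in range(10_000):
        # Robbins-Monro step sizes: sum(a_t) diverges, sum(a_t**2) converges,
        # e.g. a_t = 1 / (1 + t).
        a_t = 1.0 / (1.0 + t)
        theta = theta - a_t * noisy_grad(theta)

    # The iterates concentrate near the minimizer theta* = 0.
    print(np.linalg.norm(theta))
    ```

    Stochastic approximation theorems of the kind the paper discusses give conditions (on the step sizes, the noise, and the objective) under which such iterates converge despite never observing an exact gradient.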