
    Application of Mean Normalized Stochastic Gradient Descent for Speech Recognition

    Artificial neural networks have been on the rise in recent years. One possible optimization technique is mean-normalized stochastic gradient descent, recently proposed by Wiesler et al. [1]. This work further explains the method and examines it on a phoneme classification task. Not all findings of Wiesler et al. could be confirmed. Mean-normalized SGD is helpful only if the network is large enough (but not too deep) and the sigmoid non-linearity is used. Otherwise, mean-normalized SGD slightly impairs the network's performance and therefore cannot be recommended as a general optimization technique. [1] Simon Wiesler, Alexander Richard, Ralf Schluter, and Hermann Ney. Mean-normalized stochastic gradient for large-scale deep learning. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 180-184. IEEE, 2014.
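    To make the technique concrete, a minimal NumPy sketch of a mean-normalized update for a single fully connected layer is given below: the layer is reparameterized around a running estimate of the mean of its inputs, and a plain SGD step is taken on the reparameterized weights. This is an illustration of the idea only, not the exact formulation of Wiesler et al. [1]; the function name, the running-mean momentum, and the learning rate are assumptions.

    ```python
    import numpy as np

    def mean_normalized_sgd_step(W, b, a, grad_out, mu, lr=0.01, momentum_mu=0.99):
        """One update of a linear layer y = W @ (a - mu) + b (illustrative sketch).

        W        : (out_dim, in_dim) weights
        b        : (out_dim,)        bias
        a        : (batch, in_dim)   layer inputs (e.g. sigmoid activations)
        grad_out : (batch, out_dim)  gradient of the loss w.r.t. the layer output
        mu       : (in_dim,)         running mean of the layer inputs
        """
        # Update the running estimate of the input mean (momentum is an assumed choice).
        mu = momentum_mu * mu + (1.0 - momentum_mu) * a.mean(axis=0)

        # Gradients of the mean-normalized (centered) layer, averaged over the batch.
        a_centered = a - mu                               # (batch, in_dim)
        grad_W = grad_out.T @ a_centered / a.shape[0]     # (out_dim, in_dim)
        grad_b = grad_out.mean(axis=0)                    # (out_dim,)

        # Plain SGD step on the reparameterized weights.
        W -= lr * grad_W
        b -= lr * grad_b
        return W, b, mu
    ```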

    Training Neural Networks with Stochastic Hessian-Free Optimization

    Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed in roughly the same time as gradients. In this paper we exploit this property and study stochastic HF with gradient and curvature mini-batches independent of the dataset size. We modify Martens' HF for these settings and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. Stochastic Hessian-free optimization gives an intermediary between SGD and HF that achieves competitive performance on both classification and deep autoencoder experiments. Comment: 11 pages, ICLR 2013
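    The ingredients described above (curvature-vector products costing about as much as a gradient, a linear conjugate-gradient solve, and gradient and curvature mini-batches chosen independently) can be sketched as follows. This NumPy sketch approximates the curvature-vector product with a finite difference of gradients rather than the Gauss-Newton product used in Martens' HF, and omits dropout, damping adaptation, and CG backtracking; the names grad_fn, grad_batch, and curv_batch and all hyper-parameters are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def curvature_vector_product(grad_fn, theta, v, batch, eps=1e-4):
        """Approximate H v by a central finite difference of the gradient,
        so it costs about two extra gradient evaluations on the curvature batch.
        grad_fn(theta, batch) is assumed to return the loss gradient at theta."""
        g_plus = grad_fn(theta + eps * v, batch)
        g_minus = grad_fn(theta - eps * v, batch)
        return (g_plus - g_minus) / (2.0 * eps)

    def conjugate_gradient(apply_A, b, max_iters=50, damping=1e-2, tol=1e-8):
        """Approximately solve (A + damping * I) x = b with linear CG."""
        x = np.zeros_like(b)
        r = b.copy()          # residual b - A x, with x = 0 initially
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iters):
            Ap = apply_A(p) + damping * p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    def stochastic_hf_step(theta, grad_fn, grad_batch, curv_batch, lr=1.0):
        """One stochastic HF update: gradient on one mini-batch, curvature-vector
        products on an independent (typically smaller) mini-batch."""
        g = grad_fn(theta, grad_batch)
        apply_A = lambda v: curvature_vector_product(grad_fn, theta, v, curv_batch)
        direction = conjugate_gradient(apply_A, -g)
        return theta + lr * direction
    ```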