
    Speed up training of the recurrent neural network based on constrained optimization techniques

    This paper explores a constrained optimization technique for a substantial problem: accelerating the training of a globally recurrent neural network. Unlike most previous methods, which target feedforward neural networks, the authors apply constrained optimization to improve the gradient-based training algorithm of the globally recurrent network, adapting the learning rate during training. Using the recurrent network with the improved algorithm, experiments were performed on two real-world problems: filtering additive noise from acoustic data and classifying temporal signals for speaker identification. The experimental results show that the recurrent neural network with the improved learning algorithm trains significantly faster and achieves satisfactory performance.

    Indexed in EI and the Chinese Science Citation Database (CSCD). 06581-5881
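    The abstract does not give the paper's exact formulation, but the idea of deriving an adaptive learning rate from a descent constraint can be sketched on a toy version of the noise-filtering task. The recurrent model, the Armijo-style sufficient-decrease rule, and all constants below are illustrative assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    # Toy task (assumed, for illustration): recover a clean sinusoid from a
    # noisy observation with a tiny recurrent filter y_t = a*y_{t-1} + b*x_t.
    rng = np.random.default_rng(0)
    steps = np.arange(200)
    clean = np.sin(0.1 * steps)
    noisy = clean + 0.3 * rng.normal(size=steps.size)

    def loss_and_grad(a, b):
        """Forward pass plus exact recursive gradients of the outputs
        (dy_t/da = y_{t-1} + a*dy_{t-1}/da, dy_t/db = x_t + a*dy_{t-1}/db)."""
        y = np.zeros(steps.size)
        da = np.zeros(steps.size)
        db = np.zeros(steps.size)
        yp = dap = dbp = 0.0
        for i in range(steps.size):
            y[i] = a * yp + b * noisy[i]
            da[i] = yp + a * dap
            db[i] = noisy[i] + a * dbp
            yp, dap, dbp = y[i], da[i], db[i]
        err = y - clean
        loss = np.mean(err ** 2)
        grad = np.array([2 * np.mean(err * da), 2 * np.mean(err * db)])
        return loss, grad

    theta = np.array([0.0, 0.5])  # [a, b], arbitrary starting point
    eta = 1.0                     # learning rate, adapted every step
    for _ in range(100):
        loss, g = loss_and_grad(*theta)
        cand = theta - eta * g
        new_loss, _ = loss_and_grad(*cand)
        # Descent constraint (Armijo rule): the step must achieve sufficient
        # decrease, otherwise the learning rate is shrunk before accepting.
        while new_loss > loss - 1e-4 * eta * (g @ g) and eta > 1e-8:
            eta *= 0.5
            cand = theta - eta * g
            new_loss, _ = loss_and_grad(*cand)
        if new_loss < loss:
            theta = cand
            eta *= 1.2  # cautiously grow the rate after an accepted step

    final_loss, _ = loss_and_grad(*theta)
    print(final_loss)
    ```

    Because every accepted step must satisfy the decrease constraint, the learning rate stays as large as the loss surface permits, which is the intuition behind using constrained optimization to speed up gradient-based RNN training.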