
    Convergence of RProp and variants

    This paper examines conditions under which the Resilient Propagation (Rprop) algorithm fails to converge, identifies limitations of the Globally Convergent Rprop (GRprop) algorithm, which was previously thought to guarantee convergence, and considers pathological behaviour of the GRprop implementation in the neuralnet software package. A new robust convergent backpropagation algorithm (ARCprop) is presented. The new algorithm builds on Rprop but guarantees convergence by shortening steps as necessary to achieve a sufficient reduction in global error. Simulation results on four benchmark problems from the PROBEN1 collection show that the new algorithm achieves similar levels of performance to Rprop in terms of training speed, training accuracy, and generalization.
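    The step-shortening safeguard lends itself to a compact illustration. The Python sketch below combines a standard sign-based Rprop update with a backtracking loop that halves the proposed step until the global error shows a sufficient reduction. It is a minimal illustration of the idea described above, not the paper's actual ARCprop; all names and constants are illustrative assumptions.

        import numpy as np

        def rprop_with_backtracking(loss, grad, w, n_iter=100,
                                    eta_plus=1.2, eta_minus=0.5,
                                    step_init=0.1, step_min=1e-6, step_max=1.0,
                                    sigma=1e-4):
            """Sign-based Rprop with a step-shortening safeguard (a sketch of
            the idea behind ARCprop, not the published algorithm)."""
            steps = np.full_like(w, step_init)
            g_prev = np.zeros_like(w)
            f = loss(w)
            for _ in range(n_iter):
                g = grad(w)
                same = g * g_prev
                # Grow step sizes where the gradient sign persists, shrink where it flips.
                steps[same > 0] = np.minimum(steps[same > 0] * eta_plus, step_max)
                steps[same < 0] = np.maximum(steps[same < 0] * eta_minus, step_min)
                delta = -np.sign(g) * steps          # proposed sign-based step
                for _ in range(30):                  # shorten until sufficient decrease
                    if loss(w + delta) <= f - sigma * abs(delta @ g):
                        break
                    delta *= 0.5
                w = w + delta
                f, g_prev = loss(w), g
            return w

        # Example: minimise a simple quadratic.
        w_opt = rprop_with_backtracking(lambda w: float(np.sum(w**2)),
                                        lambda w: 2.0 * w,
                                        np.array([3.0, -2.0]))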

    Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process

    In this paper, a globally convergent first-order training algorithm is proposed that uses sign-based information of the batch error measure in the framework of the nonlinear Jacobi process. This approach allows us to equip the recently proposed Jacobi–Rprop method with the global convergence property, i.e., convergence to a local minimizer from any initial starting point. We also propose a strategy that ensures the search direction of the globally convergent Jacobi–Rprop is a descent one. The behaviour of the algorithm is empirically investigated on eight benchmark problems. Simulation results verify that there are indeed improvements in the convergence success of the algorithm.
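    The descent-direction safeguard can be sketched in a few lines. In the nonlinear Jacobi spirit, each coordinate is updated independently using only the sign of its own partial derivative; if the composite direction fails the descent test, the sketch falls back to a scaled gradient step. This illustrates the general mechanism, not the paper's Jacobi–Rprop; all names are hypothetical.

        import numpy as np

        def sign_jacobi_step(grad, w, steps):
            """One coordinate-wise, sign-based step (nonlinear Jacobi spirit):
            each weight moves by its own step size, opposite to the sign of
            its own partial derivative."""
            g = grad(w)
            d = -np.sign(g) * steps        # composite sign-based direction
            if d @ g >= 0:                 # not a descent direction:
                d = -g * steps             # fall back to a scaled gradient step
            return w + d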

    ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent

    Two major momentum-based techniques that have achieved tremendous success in optimization are Polyak's heavy ball method and Nesterov's accelerated gradient. A crucial step in all momentum-based methods is the choice of the momentum parameter m, which is always suggested to be set to less than 1. Although the choice of m < 1 is justified only under very strong theoretical assumptions, it works well in practice even when the assumptions do not necessarily hold. In this paper, we propose a new momentum-based method, ADINE, which relaxes the constraint m < 1 and allows the learning algorithm to use adaptive higher momentum. We motivate our hypothesis on m by experimentally verifying that a higher momentum (≥ 1) can help escape saddles much faster. Using this motivation, we propose our method ADINE, which weighs the previous updates more (by setting the momentum parameter > 1). We evaluate the proposed algorithm on deep neural networks and show that ADINE helps the learning algorithm converge much faster without compromising on the generalization error. Comment: 8 + 1 pages, 12 figures, accepted at CoDS-COMAD 201
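    The core update is easy to sketch. The heavy-ball recursion is v ← m·v − lr·∇f(w), then w ← w + v; the fragment below switches between a standard m < 1 and a higher m ≥ 1 depending on how the error is evolving. The switching rule and all constants are illustrative stand-ins, not the schedule proposed in the paper.

        import numpy as np

        def adaptive_momentum_step(grad, loss, w, v, best_loss,
                                   m_std=0.9, m_high=1.05, lr=0.01, tol=1.10):
            """One heavy-ball step with adaptive momentum in the spirit of
            ADINE: keep m > 1 while the error stays acceptable, fall back to
            m < 1 otherwise. Thresholds here are illustrative assumptions."""
            m = m_high if loss(w) <= tol * best_loss else m_std
            v = m * v - lr * grad(w)       # heavy-ball velocity update
            return w + v, v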

    Macroscopic Traffic Model Validation of Large Networks and the Introduction of a Gradient Based Solver

    Traffic models are important for the evaluation of various Intelligent Transport Systems and the development of new traffic infrastructure. For this to be done accurately and with confidence, the correct parameter values of the model must be identified; this identification and confirmation of parameters, known as model validation, is the focus of this thesis. Validation is performed on two different models: the first-order Cell Transmission Model (CTM) and the second-order METANET model. The CTM is validated for two UK sites of 7.8 and 21.9 km, and METANET for the same two sites, using a variety of meta-heuristic algorithms. This is done with a newly developed method that allows the optimisation method itself to determine the number of parameters to be used and the spatial extent of their application, removing the need for expert engineering knowledge and ad-hoc decomposition of networks. The thesis also develops a methodology, based on Automatic Differentiation, that allows gradient-based optimisation to be used. This approach successfully validated the METANET model for the 21.9 km site and also for a large network of 186.9 km surrounding the city of Manchester, demonstrating that gradient-based optimisation can be used for the macroscopic traffic model validation problem. In fact, the performance of the developed gradient method is superior to that of the meta-heuristics tested on the same sites. The methodology also yields additional information from the model, such as its Jacobian and the sensitivity of the objective function with respect to the individual parameters. Space-time contour plots of this newly acquired data show structures and shock waves that are not visible in the mean speed contour diagrams.
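    The role Automatic Differentiation plays here can be illustrated without any traffic-specific machinery. The sketch below pushes dual numbers (forward-mode AD) through a toy speed-relaxation model, so a single simulation pass yields both the calibration error and its derivative with respect to the model parameter. METANET and the thesis's actual solver are far richer; every name and constant below is an illustrative assumption.

        import numpy as np

        class Dual:
            """Minimal forward-mode AD (dual numbers): value + derivative."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __sub__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val - o.val, self.dot - o.dot)
            def __rsub__(self, o):
                return Dual(o) - self
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
            __rmul__ = __mul__

        def simulate_loss(tau, measured, v0=20.0, v_eq=30.0, dt=0.1):
            """Toy speed-relaxation model v' = (v_eq - v) / tau; returns the
            squared error against measured speeds as a Dual number."""
            v, loss = Dual(v0), Dual(0.0)
            inv_tau = Dual(1.0 / tau.val, -tau.dot / tau.val**2)  # d(1/tau)
            for v_obs in measured:
                v = v + dt * inv_tau * (v_eq - v)   # one explicit Euler step
                err = v - v_obs
                loss = loss + err * err
            return loss

        measured = [21.0, 22.5, 23.1, 24.0]
        tau = Dual(5.0, 1.0)            # seed derivative: d tau / d tau = 1
        out = simulate_loss(tau, measured)
        print(out.val, out.dot)         # loss and d(loss)/d(tau) in one pass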

    Evolutionary optimization of sparsely connected and time-lagged neural networks for time series forecasting

    Time Series Forecasting (TSF) is an important tool to support decision making (e.g., planning production resources). Artificial Neural Networks (ANN) are innate candidates for TSF due to advantages such as nonlinear learning and noise tolerance. However, the search for the best model is a complex task that highly affects the forecasting performance. In this work, we propose two novel Evolutionary Artificial Neural Network (EANN) approaches for TSF based on an Estimation of Distribution Algorithm (EDA) search engine. The first approach, Sparsely connected Evolutionary ANN (SEANN), evolves more flexible ANN structures to perform multi-step-ahead forecasts. The second, an automatic Time lag feature selection EANN (TEANN), evolves not only the ANN parameters (e.g., input and hidden nodes, training parameters) but also the set of time lags fed into the forecasting model. Several experiments were held using a set of six time series from different real-world domains, and two error metrics (Mean Squared Error and Symmetric Mean Absolute Percentage Error) were analyzed. The two EANN approaches were compared against a base EANN (with no ANN structure or time lag optimization) and four other methods (Autoregressive Integrated Moving Average, Random Forest, Echo State Network and Support Vector Machine). Overall, the proposed SEANN and TEANN methods obtained the best forecasting results. Moreover, they favor simpler neural network models, thus requiring less computational effort than the base EANN. The research reported here has been supported by the Spanish Ministry of Science and Innovation under project TRA2010-21371-C03-03 and FCT - Fundação para a Ciência e Tecnologia within the Project Scope PEst-OE/EEI/UI0319/2014. The authors want to especially thank Martin Stepnicka and Lenka Vavrickova for all their help. The authors also want to thank Ramon Sagarna for introducing the subject of EDA.
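    The time-lag selection component lends itself to a small illustration. The sketch below runs a PBIL/UMDA-style EDA over binary masks of candidate lags, scoring each mask with a one-step-ahead linear forecast as a cheap stand-in for training an ANN. The actual TEANN also evolves network structure and training parameters; every name and constant here is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(0)

        def eda_lag_selection(series, max_lag=12, pop=30, elite=10, iters=20, lr=0.3):
            """PBIL/UMDA-style EDA over binary lag masks: sample masks from
            per-lag probabilities, keep the elite, and shift the
            probabilities toward them."""
            p = np.full(max_lag, 0.5)          # marginal prob. of using each lag
            def fitness(mask):
                lags = np.flatnonzero(mask) + 1
                if lags.size == 0:
                    return np.inf
                # Linear one-step-ahead forecast on the selected lags
                # (a stand-in for training an ANN): MSE via least squares.
                X = np.column_stack([series[max_lag - l:-l] for l in lags])
                y = series[max_lag:]
                coef, *_ = np.linalg.lstsq(X, y, rcond=None)
                return np.mean((X @ coef - y) ** 2)
            for _ in range(iters):
                masks = rng.random((pop, max_lag)) < p     # sample population
                scores = np.array([fitness(m) for m in masks])
                best = masks[np.argsort(scores)[:elite]]   # select the elite
                p = (1 - lr) * p + lr * best.mean(axis=0)  # update distribution
            return np.flatnonzero(p > 0.5) + 1

        # Example: a noisy seasonal series with period 4.
        t = np.arange(300)
        series = np.sin(2 * np.pi * t / 4) + 0.1 * rng.standard_normal(t.size)
        print(eda_lag_selection(series))    # lags the EDA considers useful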

    A comparison of feed-forward and recurrent neural networks in time series forecasting
