Applying Bayesian Regularization for Acceleration of Levenberg-Marquardt based Neural Network Training

Abstract

Neural networks are widely used for image classification problems and have proven effective, achieving high success rates. However, one of their main challenges is the significant amount of time required to train the network. The goal of this research is to improve neural network training algorithms and to apply and test them on classification and recognition problems. In this paper, we describe a method of applying Bayesian regularization to improve the Levenberg-Marquardt (LM) algorithm and make it better suited to training neural networks. In the experimental part, we evaluate the modified LM algorithm with Bayesian regularization and use it to determine an appropriate number of hidden layers for the network in order to avoid overtraining. The experimental results were very encouraging, with 98.8% correct classification on the test samples.
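To make the idea concrete, the sketch below shows one common way of combining Levenberg-Marquardt optimization with Bayesian regularization, in the style of MacKay's evidence framework as adapted by Foresee and Hagan. It is a minimal NumPy illustration only: the toy data, network size, hyperparameter schedule, and variable names are assumptions for exposition and are not taken from the paper's implementation or experiments.

```python
# Minimal sketch: Levenberg-Marquardt training of a one-hidden-layer network
# with a Bayesian-regularized objective F = beta*E_D + alpha*E_W and
# MacKay-style re-estimation of alpha and beta.  Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (a stand-in for the paper's classification task).
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(40)

n_in, n_hid = 2, 5
N = n_hid * (n_in + 1) + (n_hid + 1)            # total number of weights

def unpack(w):
    """Split the flat weight vector into layer parameters."""
    W1 = w[: n_hid * n_in].reshape(n_hid, n_in)
    b1 = w[n_hid * n_in : n_hid * (n_in + 1)]
    W2 = w[n_hid * (n_in + 1) : n_hid * (n_in + 2)]
    b2 = w[-1]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)                  # one hidden layer, tanh units
    return h @ W2 + b2                          # linear output

def residuals(w):
    return forward(w, X) - y

def jacobian(w, eps=1e-6):
    """Finite-difference Jacobian of the residuals w.r.t. the weights."""
    e0 = residuals(w)
    J = np.empty((e0.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - e0) / eps
    return J

w = 0.1 * rng.standard_normal(N)
alpha, beta, mu = 0.01, 1.0, 1e-2               # regularization and LM damping
gamma = float(N)                                # effective number of parameters

for it in range(50):
    e = residuals(w)
    J = jacobian(w)
    ED, EW = e @ e, w @ w                       # data error and weight penalty
    F = beta * ED + alpha * EW                  # regularized objective
    grad = 2 * beta * J.T @ e + 2 * alpha * w
    H = 2 * beta * J.T @ J + 2 * alpha * np.eye(N)

    step = np.linalg.solve(H + mu * np.eye(N), -grad)   # damped LM step on F
    w_new = w + step
    e_new = residuals(w_new)
    F_new = beta * (e_new @ e_new) + alpha * (w_new @ w_new)

    if F_new < F:                               # accept the step: relax damping
        w, mu = w_new, max(mu / 10, 1e-12)
        # Bayesian re-estimation of alpha and beta (evidence framework):
        gamma = N - 2 * alpha * np.trace(np.linalg.inv(H))
        alpha = gamma / (2 * (w @ w))
        beta = (len(y) - gamma) / (2 * (e_new @ e_new))
    else:                                       # reject the step: increase damping
        mu *= 10

print("final objective:", F_new, "effective parameters:", gamma)
```

The regularization term penalizes large weights, and the re-estimated effective number of parameters indicates how much network capacity the data actually supports, which is the mechanism that helps guard against overtraining when choosing the hidden-layer size.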

This paper was published in Directory of Open Access Journals.
