Optimisation of the predictive ability of artificial neural network (ANN) models: A comparison of three ANN programs and four classes of training algorithm.

Abstract

The purpose of this study was to determine whether artificial neural network (ANN) programs implementing different backpropagation algorithms and default settings are capable of generating equivalent highly predictive models. Three ANN packages were used: INForm, CAD/Chem and MATLAB. Twenty variants of gradient descent, conjugate gradient, quasi-Newton and Bayesian regularisation algorithms were used to train networks containing a single hidden layer of 3–12 nodes. All INForm and CAD/Chem models trained satisfactorily for tensile strength, disintegration time and percentage dissolution at 15, 30, 45 and 60 min. Similarly, acceptable training was obtained for MATLAB models using Bayesian regularisation. Training of MATLAB models with other algorithms was erratic. This effect was attributed to a tendency for the MATLAB implementation of the algorithms to attenuate training in local minima of the error surface. Predictive models for tablet capping and friability could not be generated. The most predictive models from each ANN package varied with respect to the optimum network architecture and training algorithm. No significant differences were found in the predictive ability of these models. It is concluded that comparable models are obtainable from different ANN programs provided that both the network architecture and training algorithm are optimised. A broad strategy for optimisation of the predictive ability of an ANN model is proposed.
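
The broad strategy described in the abstract (vary the single-hidden-layer size and the training algorithm, then retain whichever combination best predicts held-out data) can be sketched as follows. This is an illustrative sketch only, not the study's code: it uses scikit-learn's MLPRegressor rather than INForm, CAD/Chem or MATLAB, with the 'lbfgs' solver standing in for a quasi-Newton method and 'sgd' for gradient descent, and the formulation variables, response and R-squared criterion are hypothetical placeholders.

    # Sketch of an architecture/algorithm search over a single hidden layer of 3-12 nodes.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.random((60, 4))                                   # hypothetical formulation/process variables
    y = X @ rng.random(4) + 0.05 * rng.standard_normal(60)    # hypothetical response, e.g. tensile strength

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    best = None
    for solver in ("lbfgs", "sgd"):          # stand-ins for two of the algorithm classes compared
        for n_hidden in range(3, 13):        # single hidden layer of 3-12 nodes
            model = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver=solver,
                                 max_iter=5000, random_state=0)
            model.fit(X_train, y_train)
            score = r2_score(y_test, model.predict(X_test))   # predictive ability on held-out data
            if best is None or score > best[0]:
                best = (score, solver, n_hidden)

    print("Best R^2 = %.3f with solver=%s, hidden nodes=%d" % best)

The key point mirrored from the abstract is that both the architecture and the training algorithm are searched jointly, since the optimum of one depends on the other.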

This paper was published in Bradford Scholars.
