Boundedness of weighted coefficients of perceptron learning algorithm and global convergence of fixed point and limit cycle behaviors

Abstract

In this paper, a condition for the boundedness of the weight coefficients of the perceptron is given and proved, valid for arbitrary initial weights and an arbitrary set of bounded training feature vectors. Based on this derived condition, conditions are given and proved under which the output of the perceptron, trained on a nonlinearly separable set of bounded feature vectors, globally converges to limit cycles, and the maximum number of weight updates performed before the output reaches a limit cycle is given. Finally, a perceptron with periodically time-varying weight coefficients is investigated, and an optimization approach is proposed for its design. Numerical computer simulation results show that the perceptron with periodically time-varying weight coefficients can achieve better recognition performance than a perceptron with only one set of weight coefficients.
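The limit-cycle behavior described above can be observed directly with a minimal sketch. The snippet below is illustrative only, not the paper's construction: it assumes the standard fixed-increment perceptron update rule, uses the XOR set (a classic nonlinearly separable example) as the bounded training set, and starts from zero weights. Because the data are not linearly separable, the weight sequence never converges to a fixed point; instead it stays bounded and eventually revisits a previous state, i.e. it enters a limit cycle.

```python
def sign(z):
    """Hard-limit activation: +1 for positive input, -1 otherwise."""
    return 1 if z > 0 else -1

# XOR feature vectors with a leading bias component, and their +/-1 labels.
# This is an illustrative nonlinearly separable bounded training set.
data = [
    ((1, 0, 0), -1),
    ((1, 0, 1), +1),
    ((1, 1, 0), +1),
    ((1, 1, 1), -1),
]

def train_and_detect_cycle(max_sweeps=100):
    """Run fixed-increment perceptron learning and detect a repeated
    weight state, which signals that a limit cycle has been reached.
    Returns (first_sweep, repeat_sweep, weights) or None."""
    w = [0, 0, 0]        # arbitrary initial weights (here: all zero)
    seen = {}            # weight state -> sweep index at which it first occurred
    for sweep in range(max_sweeps):
        state = tuple(w)
        if state in seen:            # state revisited: limit cycle reached
            return seen[state], sweep, w
        seen[state] = sweep
        for x, y in data:            # one full pass over the training set
            if sign(sum(wi * xi for wi, xi in zip(w, x))) != y:
                # Standard perceptron update on a misclassified sample.
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return None

print(train_and_detect_cycle())
```

Running the sketch shows the weight state repeating after only a few sweeps, with every weight component remaining small, consistent with the boundedness and limit-cycle results summarized in the abstract.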


University of Lincoln Institutional Repository

Last time updated on 28/06/2012
