Equilibrium characterization for a class of dynamical neural networks with applications to learning and synthesis.

Abstract

Neural networks have attracted considerable interest in recent years for information processing problems. Their computational capabilities stem from a massively parallel, dense interconnection of simple nonlinear elements. In this dissertation, a widely studied class of dynamical neural networks is investigated for its general computational capabilities. This is achieved by considering the design of the network in several application scenarios, viz. quadratic minimization, associative memory, and nonlinear input-output mapping. The design of the network for each application is facilitated by a qualitative analysis of the properties of the equilibrium points of a network whose elements are tailored to the specific application. Two design methodologies, learning and synthesis, are addressed. The equilibrium characterization yields specific results regarding the equilibrium points: their degree of exponential stability, estimates of their regions of attraction, and conditions for confining them to certain regions of the state space. The synthesis procedure developed from these results for quadratic minimization guarantees a unique equilibrium point; it is shown that the speed of computation can be increased by adjusting certain network parameters and is independent of the problem size. The associative memory network is synthesized by tailoring the neuronal activation functions to satisfy certain stability requirements and by using an interconnection structure that is not necessarily symmetric. Drawing on insights from the equilibrium characterization, a simple and efficient learning rule for the interconnection structure is also devised; its convergence properties are established, and guidelines are provided for selecting the initial values and the adaptation step-size parameters. The learning rule is then extended to a novel three-layer neural network architecture that functions as a nonlinear input-output mapper. The feasibility of the developed learning rules and synthesis procedures is demonstrated through a number of applications, viz. parameter and state estimation in linear systems, design of a class of pattern recognition filters, storage of specific pattern vectors, and nonlinear system identification.
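
To make the minimization setting concrete: networks of this dynamical class are commonly modeled as continuous-time gradient flows, so minimizing a quadratic E(x) = (1/2)x'Qx + c'x reduces to integrating dx/dt = -lam(Qx + c). The sketch below is a hypothetical illustration under that standard assumption, not the dissertation's construction; the function name, the gain lam (standing in for the adjustable network parameters that set the speed of convergence), and the Euler integration are illustrative choices.

    # Hypothetical sketch: gradient-flow minimization of the quadratic
    # E(x) = 0.5 * x'Qx + c'x by a Hopfield-type dynamical network.
    import numpy as np

    def quadratic_min_flow(Q, c, x0, lam=5.0, dt=1e-3, steps=10000):
        """Forward-Euler integration of dx/dt = -lam * (Q @ x + c).

        For symmetric positive definite Q the flow has a unique,
        exponentially stable equilibrium at x* = -Q^{-1} c, and the
        gain lam rescales time (i.e., speed) independently of the
        problem dimension.
        """
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(steps):
            x -= dt * lam * (Q @ x + c)
        return x

    # Example: a 2-D positive definite problem with a known minimizer.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    c = np.array([-1.0, 0.5])
    x_star = quadratic_min_flow(Q, c, x0=np.zeros(2))
    print(x_star, -np.linalg.solve(Q, c))  # the two should agree closely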
