63 research outputs found

    Stability analysis of recurrent neural networks using dissipativity

    The purpose of this work is to describe how dissipativity theory can be used for the stability analysis of discrete-time recurrent neural networks and to propose a training algorithm for producing stable networks. Using dissipativity theory, we have found conditions for the global asymptotic stability of equilibrium points of Layered Digital Dynamic Networks (LDDNs), a very general class of recurrent neural networks. The LDDNs are transformed into a standard interconnected system structure, and a fundamental theorem describing the stability of interconnected dissipative systems is applied. The theorem leads to several new sufficient conditions for the stability of equilibrium points for LDDNs. These conditions are demonstrated on several test problems and compared to previously proposed stability conditions. From these novel stability criteria, we propose a new algorithm to train stable recurrent neural networks. The standard mean square error performance index is modified to include stability criteria. This requires computation of the derivative of the maximum eigenvalue of a matrix with respect to neural network weights. The new training algorithm is tested on two examples of neural network-based model reference control systems, including a magnetic levitation system.
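
    The modified performance index described above needs the derivative of a maximum eigenvalue with respect to the weights. As a minimal sketch (not the paper's code), the example below uses the standard first-order perturbation result d(lambda_max)/dw = v^T (dA/dw) v for a symmetric matrix with a simple largest eigenvalue; the parameterization A(w) = A0 + w*B is invented purely for illustration, not the LDDN matrix from the paper.

        # Illustrative sketch only: derivative of the maximum eigenvalue of a
        # symmetric matrix A(w) with respect to a scalar parameter w, checked
        # against a finite difference. A0 and B are arbitrary symmetric matrices.
        import numpy as np

        def max_eig_and_grad(A, dA_dw):
            """Return lambda_max(A) and d(lambda_max)/dw for symmetric A,
            using d(lambda_max)/dw = v^T (dA/dw) v, where v is the unit
            eigenvector of the (assumed simple) largest eigenvalue."""
            eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
            v = eigvecs[:, -1]
            return eigvals[-1], v @ dA_dw @ v

        rng = np.random.default_rng(0)
        A0 = rng.standard_normal((4, 4)); A0 = (A0 + A0.T) / 2
        B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
        w = 0.3

        lam, dlam_dw = max_eig_and_grad(A0 + w * B, B)

        # Finite-difference check of the analytic derivative.
        eps = 1e-6
        fd = (np.linalg.eigvalsh(A0 + (w + eps) * B)[-1]
              - np.linalg.eigvalsh(A0 + (w - eps) * B)[-1]) / (2 * eps)
        print(lam, dlam_dw, fd)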

    Stability and dissipativity analysis of static neural networks with time delay

    This paper is concerned with the problems of stability and dissipativity analysis for static neural networks (NNs) with time delay. Some improved delay-dependent stability criteria are established for static NNs with time-varying or time-invariant delay using the delay partitioning technique. Based on these criteria, several delay-dependent sufficient conditions are given to guarantee the dissipativity of static NNs with time delay. All of the results in this paper depend not only on the time delay but also on the number of delay partitions. Some examples are given to illustrate the effectiveness and reduced conservatism of the proposed results.
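
    Delay-partitioning criteria of this kind are usually derived from a Lyapunov-Krasovskii functional whose delay terms are split over m equal sub-intervals of length h = d/m. The expression below (in LaTeX) is a generic example of such a functional, not the paper's exact construction; P, Q_i, and R_i are positive definite matrices found by solving linear matrix inequalities.

        V(x_t) = x^{\top}(t) P x(t)
                 + \sum_{i=1}^{m} \int_{t-ih}^{t-(i-1)h} x^{\top}(s)\, Q_i\, x(s)\, \mathrm{d}s
                 + h \sum_{i=1}^{m} \int_{-ih}^{-(i-1)h} \int_{t+\theta}^{t} \dot{x}^{\top}(s)\, R_i\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta,
        \qquad h = d/m

    Finer partitions (larger m) add decision variables but generally give less conservative conditions, which is why the results depend on the number of partitions as well as on the delay itself.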

    Recurrent Neural Networks: Error Surface Analysis and Improved Training

    Recurrent neural networks (RNNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. One of the reasons that make RNN training difficult, especially with batch, gradient-based methods, is the existence of spurious valleys in the error surface. In this work, a mathematical framework was developed to analyze the spurious valleys that appear in most practical RNN architectures, regardless of their size. The insights gained from this analysis suggested a new procedure for improving the training process. The procedure uses a batch training method based on a modified version of the Levenberg-Marquardt algorithm. This new procedure mitigates the effects of spurious valleys in the error surface of RNNs. The results on a variety of test problems show that the new procedure is consistently better than existing training algorithms (both batch and stochastic) for training RNNs. In addition, a framework for neural network controllers based on the model reference adaptive control (MRAC) architecture was developed. This architecture has been used before, but the difficulties in training RNNs have limited its use; the new training procedures have made MRAC more attractive. The updated MRAC framework is very flexible and incorporates disturbance rejection, regulation, and tracking. The simulation and testing results on several real systems show that this type of neural control is well suited for highly nonlinear plants.
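
    The batch method referred to above builds on the Levenberg-Marquardt update. As a minimal, self-contained sketch (function name and toy example are illustrative, and the paper's spurious-valley modification is not shown), one candidate step looks like this:

        # Illustrative sketch only: one Levenberg-Marquardt candidate step,
        # dw = -(J^T J + mu I)^{-1} J^T e, where J is the Jacobian of the error
        # vector e with respect to the weights w. In a full batch trainer, mu is
        # raised when a step increases the sum-squared error and lowered otherwise.
        import numpy as np

        def lm_step(J, e, w, mu):
            """Return the candidate weight vector w + dw."""
            n = w.size
            dw = np.linalg.solve(J.T @ J + mu * np.eye(n), -(J.T @ e))
            return w + dw

        # Toy usage: errors e(w) = A w - b, so J = A, and one step with mu -> 0
        # recovers the least-squares solution.
        rng = np.random.default_rng(1)
        A, b = rng.standard_normal((6, 3)), rng.standard_normal(6)
        w = np.zeros(3)
        w_new = lm_step(A, A @ w - b, w, mu=1e-9)
        print(np.allclose(w_new, np.linalg.lstsq(A, b, rcond=None)[0]))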

    Control of Large Actuator Arrays Using Pattern-Forming Systems

    Pattern-forming systems are used to model many diverse phenomena from biology, chemistry and physics. These systems of differential equations have the property that as a bifurcation (or control) parameter passes through a critical value, a stable spatially uniform equilibrium state gives way to a stable pattern state, which may have spatial variation, time variation, or both. There is a large body of experimental and mathematical work on pattern-forming systems. However, these ideas have not yet been adequately exploited in engineering, particularly in the control of smart systems; i.e., feedback systems having large numbers of actuators and sensors. With dramatic recent improvements in micro-actuator and micro-sensor technology, there is a need for control schemes better than the conventional approach of reading out all of the sensor information to a computer, performing all the necessary computations in a centralized fashion, and then sending out commands to each individual actuator. Potential applications for large arrays of micro-actuators include adaptive optics (in particular, micromirror arrays), suppressing turbulence and vortices in fluid boundary-layers, micro-positioning small parts, and manipulating small quantities of chemical reactants. The main theoretical result presented is a Lyapunov functional for the cubic nonlinearity activator-inhibitor model pattern-forming system. Analogous Lyapunov functionals then follow for certain generalizations of the basic cubic nonlinearity model. One such generalization is a complex activator-inhibitor equation which, under suitable hypotheses, models the amplitude and phase evolution in the continuum limit of a network of coupled van der Pol oscillators, coupled to a network of resonant circuits, with an external oscillating input. Potential applications for such coupled van der Pol oscillator networks include quasi-optical power combining and phased-array antennas. In addition to the Lyapunov functional, a Lyapunov function for the truncated modal dynamics is derived, and the Lyapunov functional is also used to analyze the stability of certain equilibria. Basic existence, uniqueness, regularity, and dissipativity properties of solutions are also verified, engineering realizations of the dynamics are discussed, and finally, some of the potential applications are explored.
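
    To make "cubic nonlinearity activator-inhibitor model" concrete, one standard form of such a reaction-diffusion system is written below in LaTeX; the dissertation's exact equations, coefficients, and boundary conditions may differ. Here u is the activator field, v the inhibitor field, and mu plays the role of the bifurcation (control) parameter: below a critical value of mu the spatially uniform equilibrium is stable, and above it patterned states can emerge.

        \frac{\partial u}{\partial t} = D_u \nabla^{2} u + \mu u - u^{3} - \sigma v, \qquad
        \tau \frac{\partial v}{\partial t} = D_v \nabla^{2} v + u - \gamma v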

    Proto-neurons from abiotic polypeptides

    To understand the origins of life, we must first come to grips with the still-unresolved emergence of the first informational polymers and cell-like assemblies that developed into living systems. Heating amino acid mixtures to their boiling point produces thermal proteins that self-assemble into membrane-bound protocells, offering a compelling abiogenic route for forming polypeptides. Recent research has revealed the presence of electrical excitability and signal-processing capacities in proteinoids, indicating the possibility of primitive cognitive functions and problem-solving capabilities. This review examines the characteristics exhibited by proteinoids, including electrical activity and self-assembly properties, and explores the possible roles of such polypeptides under prebiotic conditions in the emergence of early biomolecular complexity. Experiments showcasing the possibility of unconventional computing with proteinoids, as well as the modelling of proteinoid assemblies into synthetic proto-brains, are presented. Proteinoids’ robust abiogenic production, biomimetic features, and computational capability shed light on potential phases in the evolution of polypeptides and primitive life from the primordial environment.