
    Multi-almost periodicity and invariant basins of general neural networks under almost periodic stimuli

    In this paper, we investigate the convergence dynamics of $2^N$ almost periodic encoded patterns of general neural networks (GNNs) subjected to external almost periodic stimuli, including almost periodic delays. Invariant regions are established for the existence of $2^N$ almost periodic encoded patterns under two classes of activation functions. By employing the property of the $\mathscr{M}$-cone and an inequality technique, attracting basins are estimated and criteria are derived under which the networks converge exponentially toward the $2^N$ almost periodic encoded patterns. The obtained results are new; they extend and generalize corresponding results in the previous literature.

    Comment: 28 pages, 4 figures

    M-matrices and global convergence of discontinuous neural networks

    The paper considers a general class of neural networks possessing discontinuous neuron activations and neuron interconnection matrices belonging to the class of M-matrices or H-matrices. A number of results are established on global exponential convergence of the state and output solutions towards a unique equilibrium point. Moreover, by exploiting the presence of sliding modes, conditions are given under which convergence in finite time is guaranteed. In all cases, the exponential convergence rate, or the finite convergence time, can be quantitatively estimated from the parameters defining the neural network. As a by-product, it is proved that the considered neural networks, although described by a system of differential equations with a discontinuous right-hand side, enjoy uniqueness of the solution starting from a given initial condition. The results are proved by a generalized Lyapunov-like approach and by using tools from the theory of differential equations with discontinuous right-hand side. At the core of the approach is a basic lemma, which holds under the assumption of M-matrices or H-matrices, and which makes it possible to study the limiting behaviour of a suitably defined distance between any pair of solutions to the neural network.
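    The M-matrix hypothesis above can be checked numerically. The sketch below is not from the paper; it uses the standard characterization that a real square matrix $A$ with nonpositive off-diagonal entries (a Z-matrix) is a nonsingular M-matrix iff $A = sI - B$ with $B \ge 0$ elementwise and $s > \rho(B)$, the spectral radius of $B$. The function name and tolerance are our own illustrative choices.

    ```python
    import numpy as np

    def is_nonsingular_m_matrix(A, tol=1e-12):
        """Check whether A is a nonsingular M-matrix.

        Criterion: A must be a Z-matrix (off-diagonal entries <= 0),
        and writing A = s*I - B with B >= 0 elementwise, the shift s
        must exceed the spectral radius rho(B).
        """
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        # Z-matrix test: all off-diagonal entries nonpositive.
        off_diag = A - np.diag(np.diag(A))
        if np.any(off_diag > tol):
            return False
        # Any s >= max diagonal entry makes B = s*I - A nonnegative.
        s = np.max(np.diag(A))
        B = s * np.eye(n) - A
        rho = np.max(np.abs(np.linalg.eigvals(B)))
        return bool(s > rho + tol)

    # Strictly diagonally dominant Z-matrix: a nonsingular M-matrix.
    print(is_nonsingular_m_matrix([[2.0, -1.0], [-1.0, 2.0]]))   # True
    # Zero row sums give a singular matrix, so the test fails.
    print(is_nonsingular_m_matrix([[1.0, -1.0], [-1.0, 1.0]]))   # False
    ```

    For interconnection matrices of the kind the abstract describes, a check of this sort verifies the hypothesis under which the paper's convergence lemma applies; the paper itself works with the more general H-matrix class as well.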