
    Novel fixed-time stabilization of quaternion-valued BAMNNs with disturbances and time-varying coefficients

    In this paper, quaternion numbers and time-varying coefficients are introduced into traditional BAMNNs, and the resulting model of quaternion-valued BAMNNs is formulated. For the first time, fixed-time stabilization of time-varying quaternion-valued BAMNNs is investigated. A novel fixed-time control method is adopted, in which the choice of the Lyapunov function is more general than in most previous results. To cope with the noncommutativity of quaternion multiplication, two different fixed-time control methods are provided: a decomposition method and a non-decomposition method. Furthermore, to reduce the control strength and improve control efficiency, an adaptive fixed-time control strategy is proposed. Lastly, numerical examples are presented to demonstrate the effectiveness of the theoretical results. © 2020 the Author(s), licensee AIMS Press.
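
    As a rough illustration of the ingredients above (not the paper's actual controller: the gains, exponents, and toy drift term below are assumptions), the following Python sketch shows why quaternion multiplication is noncommutative and how a decomposition method can treat the quaternion state as a vector in R^4 under a generic fixed-time control law u = -k1|e|^α sgn(e) - k2|e|^β sgn(e) with 0 < α < 1 < β:

```python
import numpy as np

def hamilton_product(p, q):
    """Quaternion (Hamilton) product; noncommutative in general."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

p = np.array([0.0, 1.0, 0.0, 0.0])   # the unit quaternion i
q = np.array([0.0, 0.0, 1.0, 0.0])   # the unit quaternion j
print(hamilton_product(p, q))        # i*j =  k -> [0, 0, 0,  1]
print(hamilton_product(q, p))        # j*i = -k -> [0, 0, 0, -1]

# Decomposition method (illustrative): treat the quaternion state as a
# vector in R^4 and apply a generic fixed-time law component-wise:
#   u = -k1*|e|^alpha*sgn(e) - k2*|e|^beta*sgn(e),  0 < alpha < 1 < beta.
def fixed_time_control(e, k1=2.0, k2=2.0, alpha=0.5, beta=1.5):
    return -k1 * np.abs(e)**alpha * np.sign(e) - k2 * np.abs(e)**beta * np.sign(e)

# Euler simulation of a toy quaternion-valued state driven to the origin;
# the drift term 0.5*x stands in for the network dynamics.
x, h = np.array([1.0, -2.0, 0.5, 3.0]), 1e-3
for _ in range(5000):
    x = x + h * (0.5 * x + fixed_time_control(x))
print(np.linalg.norm(x))  # close to 0 well before t = 5 s
```

    The α-term dominates near the origin while the β-term dominates far from it, which is what bounds the settling time independently of the initial condition.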

    Convergence of Discrete-Time Cellular Neural Networks with Application to Image Processing

    The paper considers a class of discrete-time cellular neural networks (DT-CNNs) obtained by applying Euler's discretization scheme to standard CNNs. Let T be the DT-CNN interconnection matrix, which is defined by the feedback cloning template. The paper shows that a DT-CNN is convergent, i.e. each solution tends to an equilibrium point, when T is symmetric and, in the case where T + En is not positive semidefinite, the step size of Euler's discretization scheme does not exceed a given bound (En is the n × n unit matrix). It is shown that two relevant properties hold as a consequence of the local and space-invariant interconnecting structure of a DT-CNN, namely: (1) the bound on the step size can be easily estimated via the elements of the DT-CNN feedback cloning template only; (2) the bound is independent of the DT-CNN dimension. These two properties make DT-CNNs very effective for computer simulations and for practical applications to high-dimensional processing tasks. The obtained results are proved via a Lyapunov approach and LaSalle's Invariance Principle, in combination with some fundamental inequalities enjoyed by the projection operator on a convex set. The results are compared with previous ones in the literature on the convergence of DT-CNNs, and also with those obtained for different neural network models such as the Brain-State-in-a-Box model. Finally, the results on convergence are illustrated via the application to some relevant 2D and 1D DT-CNNs for image processing tasks.
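
    For concreteness, here is a minimal Python sketch of the construction (the template values, bias, step size, and input image are illustrative assumptions, not taken from the paper). Euler's scheme turns the standard CNN dynamics ẋ = -x + A∗y + B∗u + z, y = sat(x), into the iteration x[k+1] = x[k] + h(-x[k] + A∗y[k] + B∗u + z), where the templates A and B act by 2D convolution because of the local, space-invariant interconnection structure:

```python
import numpy as np
from scipy.signal import convolve2d

def sat(x):
    """Standard CNN output nonlinearity: piecewise-linear saturation to [-1, 1]."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

# Illustrative space-invariant cloning templates (placeholder values);
# a centrally symmetric feedback template A yields a symmetric T.
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])          # feedback template
B = np.array([[-1.0, -1.0, -1.0],
              [-1.0,  8.0, -1.0],
              [-1.0, -1.0, -1.0]])       # control (feedforward) template
z = -0.5                                 # bias

def dtcnn_run(u, h=0.1, steps=200):
    """Euler-discretized CNN: x[k+1] = x[k] + h*(-x[k] + A*y[k] + B*u + z)."""
    x = np.zeros_like(u)
    ff = convolve2d(u, B, mode="same") + z   # input term, constant over time
    for _ in range(steps):
        x = x + h * (-x + convolve2d(sat(x), A, mode="same") + ff)
    return sat(x)

u = np.sign(np.random.randn(32, 32))   # toy bipolar input image
out = dtcnn_run(u)                      # processed image in [-1, 1]
```

    Per the result above, when the feedback template yields a symmetric T, an admissible step size h can be estimated from the template entries alone, independently of the image size; h = 0.1 here is merely a plausible choice.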

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF, and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and on a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy-efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
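
    A minimal sketch of the NEF "recipe" mentioned above, using the public Nengo API (all numeric choices are illustrative): to compile a linear system dx/dt = Ax + Bu onto a recurrent spiking population with synaptic time constant τ, the standard NEF mapping uses a recurrent transform of τA + I and an input transform of τB; a pure integrator (A = 0, B = 1) is the simplest case:

```python
import nengo

TAU = 0.1  # recurrent synaptic time constant, in seconds

model = nengo.Network(label="NEF integrator: dx/dt = u")
with model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)  # brief input pulse
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)

    # NEF mapping of dx/dt = A x + B u onto a recurrent spiking network:
    # recurrent transform = tau*A + I, input transform = tau*B.
    # For the integrator A = 0, B = 1:
    nengo.Connection(ens, ens, transform=1.0, synapse=TAU)   # tau*0 + 1
    nengo.Connection(stim, ens, transform=TAU, synapse=TAU)  # tau*1

    probe = nengo.Probe(ens, synapse=0.01)  # decoded, filtered state estimate

with nengo.Simulator(model) as sim:
    sim.run(1.0)
decoded = sim.data[probe]  # should integrate the pulse to ~0.2 and hold
```

    The decoded value should ramp to roughly 0.2 during the input pulse and then hold, i.e. the spiking network implements the differential equation rather than merely filtering its input.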