Recent Advances and Applications of Fractional-Order Neural Networks
This paper surveys the growth, development, and future of various forms of fractional-order neural networks. Advances in structure, learning algorithms, and methods are critically investigated and summarized, along with recent trends in the dynamics of these networks. The forms of fractional-order neural networks considered in this study are Hopfield, cellular, memristive, complex-valued, and quaternion-valued networks. Further, applications of fractional-order neural networks in computational fields such as system identification, control, optimization, and stability analysis are critically analyzed and discussed.
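As a concrete illustration of the dynamics such surveys cover, the following is a minimal sketch (not taken from the paper) of a two-neuron fractional-order Hopfield-type network, D^α x = −x + W tanh(x) + I, integrated with an explicit Grünwald–Letnikov scheme; the weight matrix, order α, step size, and horizon are all illustrative assumptions.

```python
import numpy as np

def gl_coeffs(alpha, n):
    # Grunwald-Letnikov coefficients c_k = (-1)^k * binom(alpha, k),
    # via the recursion c_0 = 1, c_k = (1 - (1 + alpha)/k) * c_{k-1}
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = (1.0 - (1.0 + alpha) / k) * c[k - 1]
    return c

def simulate(alpha=0.9, h=0.01, T=10.0):
    # Fractional-order Hopfield-type network D^alpha x = -x + W tanh(x) + I
    # (illustrative weights; initial-value effects of the GL scheme ignored)
    W = np.array([[2.0, -1.2], [1.8, 1.7]])
    I = np.array([0.0, 0.0])
    n_steps = int(T / h)
    c = gl_coeffs(alpha, n_steps)
    x = np.zeros((n_steps + 1, 2))
    x[0] = [0.1, -0.1]
    for n in range(1, n_steps + 1):
        f = -x[n - 1] + W @ np.tanh(x[n - 1]) + I
        # explicit GL step: x_n = h^alpha * f(x_{n-1}) - sum_{k=1}^{n} c_k x_{n-k}
        mem = (c[1:n + 1][:, None] * x[n - 1::-1]).sum(axis=0)
        x[n] = h**alpha * f - mem
    return x
```

For α → 1 the coefficient recursion gives c_1 ≈ −1 and negligible deeper memory, so the step reduces to forward Euler, which is a quick sanity check on the scheme.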
Finite-time stabilization for fractional-order inertial neural networks with time-varying delays
This paper deals with the finite-time stabilization of fractional-order inertial neural networks with time-varying delays (FOINNs). Firstly, through a suitably chosen variable substitution, the system is transformed into a first-order fractional differential equation. Secondly, by constructing Lyapunov functionals and using analytical techniques, together with new control schemes (including both delay-dependent and delay-free controllers), novel and effective criteria are established to attain finite-time stabilization of the addressed system. Finally, two examples illustrate the effectiveness and feasibility of the obtained results.
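The order-reduction substitution described above can be sketched numerically. In this hedged illustration (not the paper's model: the delay terms are omitted, and all coefficients, the auxiliary parameter ξ, and the feedback gains are invented for the demo), the inertial equation D^{2α} x = −a D^α x − b x + c tanh(x) + u is rewritten with x1 = x, x2 = D^α x + ξ x1 as a first-order fractional system, then stabilized with a delay-free linear feedback.

```python
import numpy as np

def gl_coeffs(alpha, n):
    # c_0 = 1, c_k = (1 - (1 + alpha)/k) c_{k-1}  (Grunwald-Letnikov weights)
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = (1.0 - (1.0 + alpha) / k) * c[k - 1]
    return c

def simulate_foinn(alpha=0.9, h=0.01, T=20.0, a=1.0, b=1.0, cf=0.5, xi=0.5):
    # Inertial model D^{2a} x = -a D^a x - b x + cf*tanh(x) + u.
    # Substitution x1 = x, x2 = D^a x + xi*x1 yields the first-order system
    #   D^a x1 = x2 - xi*x1
    #   D^a x2 = (a*xi - xi^2 - b) x1 + (xi - a) x2 + cf*tanh(x1) + u
    n_steps = int(T / h)
    c = gl_coeffs(alpha, n_steps)
    z = np.zeros((n_steps + 1, 2))
    z[0] = [1.0, -1.0]
    for n in range(1, n_steps + 1):
        x1, x2 = z[n - 1]
        u = -3.0 * x1 - 3.0 * x2   # delay-free feedback with illustrative gains
        f = np.array([
            x2 - xi * x1,
            (a * xi - xi**2 - b) * x1 + (xi - a) * x2 + cf * np.tanh(x1) + u,
        ])
        mem = (c[1:n + 1, None] * z[n - 1::-1]).sum(axis=0)
        z[n] = h**alpha * f - mem
    return z
```

The point of the substitution is visible in the code: once x2 absorbs the fractional derivative of x1, only first-order fractional operators remain, so standard first-order Lyapunov machinery applies.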
Synchronization analysis of coupled fractional-order neural networks with time-varying delays
In this paper, the complete synchronization and Mittag-Leffler synchronization problems for a class of coupled fractional-order neural networks with time-varying delays are introduced and studied. First, sufficient conditions for the controlled system to reach complete synchronization are established by using the Kronecker product technique and the Lyapunov direct method under pinning control. Here the pinning controller needs to act on only part of the nodes, which saves control resources. To make the system achieve complete synchronization, it suffices to show that the error system is stable. Next, a new adaptive feedback controller is designed, combining the Razumikhin-type method and Mittag-Leffler stability theory to make the controlled system realize Mittag-Leffler synchronization. The controller involves time delays, and the calculation can be simplified by constructing an appropriate auxiliary function. Finally, two numerical examples are given. The simulations show that the conditions of the main theorems are not difficult to satisfy, and the results confirm the feasibility of the theorems.
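The pinning idea, controlling only a subset of nodes and letting diffusive coupling propagate the correction, can be sketched as follows. This is a simplified stand-in for the paper's setting (no delays, an invented 4-node ring topology, illustrative weights, coupling strength, and gain): two of four coupled fractional-order nodes are pinned to an isolated target trajectory s(t).

```python
import numpy as np

def gl_coeffs(alpha, n):
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = (1.0 - (1.0 + alpha) / k) * c[k - 1]
    return c

def simulate_pinning(alpha=0.9, h=0.005, T=5.0, coupling=10.0, gain=20.0):
    W = np.array([[2.0, -1.2], [1.8, 1.7]])   # illustrative node weights
    # diffusive coupling matrix of a 4-node ring (rows sum to zero)
    G = np.array([[-2., 1., 0., 1.],
                  [1., -2., 1., 0.],
                  [0., 1., -2., 1.],
                  [1., 0., 1., -2.]])
    pinned = [0, 1]                            # only part of the nodes is controlled
    n_steps = int(T / h)
    c = gl_coeffs(alpha, n_steps)
    x = np.zeros((n_steps + 1, 4, 2))          # node states
    s = np.zeros((n_steps + 1, 2))             # isolated target trajectory
    x[0] = [[1., 1.], [-1., .5], [.5, -1.], [-.5, -.5]]
    s[0] = [0.1, -0.1]
    for n in range(1, n_steps + 1):
        xp, sp = x[n - 1], s[n - 1]
        fx = -xp + np.tanh(xp) @ W.T + coupling * (G @ xp)
        for i in pinned:                        # pinning feedback on pinned nodes only
            fx[i] -= gain * (xp[i] - sp)
        fs = -sp + W @ np.tanh(sp)
        x[n] = h**alpha * fx - (c[1:n + 1, None, None] * x[n - 1::-1]).sum(axis=0)
        s[n] = h**alpha * fs - (c[1:n + 1, None] * s[n - 1::-1]).sum(axis=0)
    return np.linalg.norm(x - s[:, None, :], axis=2)   # per-node sync error
```

Even the unpinned nodes track s(t): the synchronization error of each node shrinks because the pinned nodes drag their neighbours along through the coupling.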
ψ-type stability of reaction–diffusion neural networks with time-varying discrete delays and bounded distributed delays
In this paper, the ψ-type stability and robust ψ-type stability of reaction–diffusion neural networks (RDNNs) with Dirichlet boundary conditions, time-varying discrete delays and bounded distributed delays are investigated. Firstly, we analyze the ψ-type stability and robust ψ-type stability of RDNNs with time-varying discrete delays by means of ψ-type functions combined with inequality techniques, and put forward several ψ-type stability criteria for the considered networks. Additionally, models of RDNNs with bounded distributed delays are established and sufficient conditions guaranteeing ψ-type stability and robust ψ-type stability are given. Lastly, two examples are provided to confirm the effectiveness of the derived results.
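ψ-type stability asserts a bound of the form ||x(t)|| ≤ M ψ(t)^(−ε) for a ψ-type function ψ; the familiar exponential case corresponds to ψ(t) = e^t. As a hedged sketch (a scalar delayed ODE rather than the paper's reaction–diffusion model, with invented coefficients), the following verifies such a bound numerically for x'(t) = −a x(t) + b x(t − τ).

```python
import numpy as np

def simulate_dde(a=2.0, b=0.5, tau=1.0, h=0.001, T=20.0):
    # Explicit Euler for x'(t) = -a x(t) + b x(t - tau),
    # constant history x(t) = 1 on [-tau, 0]
    d = int(tau / h)
    n = int(T / h)
    x = np.ones(n + d + 1)          # indices 0..d hold the history segment
    for k in range(d, n + d):
        x[k + 1] = x[k] + h * (-a * x[k] + b * x[k - d])
    t = np.arange(n + 1) * h
    return t, x[d:]

t, x = simulate_dde()
# psi-type bound with psi(t) = e^t and eps = 0.5: |x(t)| <= M * e^(-0.5 t);
# M below is the smallest constant that works on the simulated grid
M = np.max(np.abs(x) * np.exp(0.5 * t))
```

Since a > b > 0, the solution decays exponentially, so the supremum M stays finite; for slower, e.g. polynomial, decay one would instead take ψ(t) = 1 + t.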
Finite-time Anti-synchronization of Memristive Stochastic BAM Neural Networks with Probabilistic Time-varying Delays
This paper investigates the drive-response finite-time anti-synchronization of memristive bidirectional associative memory neural networks (MBAMNNs). Firstly, a class of MBAMNNs with mixed probabilistic time-varying delays and stochastic perturbations is formulated and analyzed. Secondly, a nonlinear control law is constructed and utilized to guarantee drive-response finite-time anti-synchronization of the neural networks. Thirdly, by employing inequality techniques and constructing an appropriate Lyapunov function, several anti-synchronization criteria are derived. Finally, a numerical simulation is provided to demonstrate the effectiveness of the proposed mechanism.
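Anti-synchronization means the response state converges to the negative of the drive state, i.e. the error e = y + x vanishes. The following is a heavily simplified sketch (deterministic, non-memristive, no delays; weights, gains, and the sign-term controller are invented for the demo): because tanh is odd, f(−x) = −f(x), a linear-plus-sign feedback on e drives y toward −x.

```python
import numpy as np

def f(x, W):
    # odd node dynamics: f(-x) = -f(x), which makes anti-synchronization natural
    return -x + W @ np.tanh(x)

def simulate_antisync(k=5.0, eta=0.5, h=0.001, T=5.0):
    W = np.array([[2.0, -1.2], [1.8, 1.7]])   # illustrative weights
    n = int(T / h)
    x = np.array([0.5, -0.8])                 # drive system state
    y = np.array([1.0, 1.0])                  # response system state
    errs = np.empty(n + 1)
    errs[0] = np.linalg.norm(x + y)
    for i in range(n):
        e = y + x                              # anti-synchronization error
        u = -k * e - eta * np.sign(e)          # linear + sign term (finite-time style)
        x = x + h * f(x, W)
        y = y + h * (f(y, W) + u)
        errs[i + 1] = np.linalg.norm(x + y)
    return errs
```

The sign term gives the finite-time flavour (the error reaches a neighbourhood of zero in bounded time rather than only asymptotically), at the cost of small chattering at the level of the step size.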
Novel fixed-time stabilization of quaternion-valued BAMNNs with disturbances and time-varying coefficients
In this paper, with quaternion values and time-varying coefficients introduced into traditional BAMNNs, the model of quaternion-valued BAMNNs is formulated. For the first time, fixed-time stabilization of time-varying quaternion-valued BAMNNs is investigated. A novel fixed-time control method is adopted, in which the choice of the Lyapunov function is more general than in most previous results. To cope with the noncommutativity of quaternion multiplication, two different fixed-time control methods are provided: a decomposition method and a non-decomposition method. Furthermore, to reduce the control strength and improve control efficiency, an adaptive fixed-time control strategy is proposed. Lastly, numerical examples are presented to demonstrate the effectiveness of the theoretical results. © 2020 the Author(s), licensee AIMS Press
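The noncommutativity that motivates the two control methods is easy to see concretely: for the Hamilton product, ij = k but ji = −k. A minimal sketch (standard quaternion algebra, not code from the paper):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z);
    # noncommutative: qmul(p, q) != qmul(q, p) in general
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])
# ij = k, ji = -k: the order of factors matters
```

The decomposition method sidesteps this by splitting each quaternion state into its four real components (the w, x, y, z slots above) and working with real-valued subsystems, whereas the non-decomposition method handles the quaternion states directly.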
Dimensions of Timescales in Neuromorphic Computing Systems
This article is a public deliverable of the EU project "Memory technologies with multi-scale time constants for neuromorphic architectures" (MeMScales, https://memscales.eu, Call ICT-06-2019 Unconventional Nanoelectronics, project number 871371). This arXiv version is a verbatim copy of the deliverable report, with administrative information stripped. It collects a wide and varied assortment of phenomena, models, research themes and algorithmic techniques that are connected with timescale phenomena in the fields of computational neuroscience, mathematics, machine learning and computer science, with a bias toward aspects that are relevant for neuromorphic engineering. It turns out that this theme is very rich indeed and spreads out in many directions which defy a unified treatment. We collected several dozen sub-themes, each of which has been investigated in specialized settings (in the neurosciences, mathematics, computer science and machine learning) and has been documented in its own body of literature. The more we dived into this diversity, the more it became clear that our first effort to compose a survey must remain sketchy and partial. We conclude with a list of insights distilled from this survey which give general guidelines for the design of future neuromorphic systems.
- …