Analyzing Stability of Equilibrium Points in Neural Networks: A General Approach
Networks of coupled neural systems represent an important class of models in
computational neuroscience. In some applications it is required that
equilibrium points in these networks remain stable under parameter variations.
Here we present a general methodology to yield explicit constraints on the
coupling strengths to ensure the stability of the equilibrium point. Two models
of coupled excitatory-inhibitory oscillators are used to illustrate the
approach.
Comment: 20 pages, 4 figures
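The linearization idea behind such explicit coupling constraints can be illustrated with a minimal sketch. This is not the paper's two excitatory-inhibitory oscillator models; it assumes a simple linear rate network x' = -decay·x + W@x, checks the exact eigenvalue condition, and compares it against a sufficient Gershgorin-type constraint on the coupling strengths:

```python
import numpy as np

def equilibrium_is_stable(W, decay=1.0):
    """Exact check: the origin of x' = -decay*x + W@x is exponentially
    stable iff every eigenvalue of J = W - decay*I has negative real part."""
    J = W - decay * np.eye(W.shape[0])
    return bool(np.max(np.linalg.eigvals(J).real) < 0)

def coupling_constraint_ok(W, decay=1.0):
    """Sufficient (Gershgorin-type) explicit constraint on coupling
    strengths: if each diagonal entry plus the absolute off-diagonal
    row sum stays below the decay rate, every Gershgorin disc of J
    lies in the open left half-plane."""
    off = np.sum(np.abs(W), axis=1) - np.abs(np.diag(W))
    return bool(np.max(np.diag(W) + off) < decay)

# weak excitatory-inhibitory coupling: both tests pass
W_weak = np.array([[0.2, 0.1],
                   [-0.1, 0.2]])
# strong self-excitation: the equilibrium is destabilized
W_strong = np.array([[2.0, 0.0],
                     [0.0, 2.0]])
```

The eigenvalue test is necessary and sufficient but implicit in the parameters; the Gershgorin test is conservative but gives the kind of explicit, checkable bound on coupling strengths that the abstract describes.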
Editorial [to] Analysis of nonlinear dynamics of neural networks
Fundação para a Ciência e a Tecnologia (FCT)
Topological geometry analysis for complex dynamic systems based on adaptive control method
Several models have been proposed for dynamic systems, and different criteria have been analyzed for such models, such as Hamiltonian structure, synchronization, Lyapunov exponents, and stability. Geometric criteria play a significant part in analyzing dynamic systems, and several studies analyze the geometry of such topics. Synchronization and complex-network control typically assume a specified topology; in practice, however, the exact topology may be unknown. In the present paper, by making use of the adaptive control method, a control scheme is developed to determine the actual topology. The basic idea of the proposed method is to use the received evolution of the network nodes.
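A minimal sketch of the adaptive-control idea, under illustrative assumptions (a hypothetical linear network with unknown coupling matrix `A_true`, a persistently exciting input, and the standard gradient adaptive law; the paper's actual scheme is not reproduced here):

```python
import numpy as np

# True network x' = -x + A_true@x + u(t) with unknown coupling A_true;
# an observer with the gradient adaptive law A_hat' = gamma * e x^T
# recovers the topology from the received node trajectories.
A_true = np.array([[0.0, 0.5],
                   [-0.5, 0.0]])
dt, steps, k, gamma = 0.01, 20000, 5.0, 2.0    # integrate to t = 200
x = np.array([1.0, 0.0])
x_hat = np.zeros(2)
A_hat = np.zeros((2, 2))
err_start = np.linalg.norm(A_true - A_hat)

for i in range(steps):
    t = i * dt
    u = np.array([np.sin(t), np.cos(2 * t)])   # persistently exciting drive
    e = x - x_hat                              # observation error
    x_next = x + dt * (-x + A_true @ x + u)    # unknown-topology network
    x_hat = x_hat + dt * (-x_hat + A_hat @ x + k * e + u)  # adaptive observer
    A_hat = A_hat + dt * gamma * np.outer(e, x)            # topology estimate
    x = x_next

err_end = np.linalg.norm(A_true - A_hat)       # shrinks toward zero
```

The observation-error Lyapunov function V = ½‖e‖² + (1/2γ)‖A − Â‖² is nonincreasing under this law, and persistent excitation drives the topology estimate toward the true coupling matrix.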
Learning for System Identification of NDAE-modeled Power Systems
System identification through learning approaches is emerging as a promising
strategy for understanding and simulating dynamical systems, which nevertheless
faces considerable difficulty when confronted with power systems modeled by
differential-algebraic equations (DAEs). This paper introduces a neural network
(NN) framework for effectively learning and simulating solution trajectories of
DAEs. The proposed framework leverages the synergy between Implicit Runge-Kutta
(IRK) time-stepping schemes tailored for DAEs and NNs (including a differential
NN (DNN)). The framework enforces the NN to satisfy the algebraic equations of
the DAEs as hard constraints, and it is suited to identifying the ordinary
differential equation (ODE)-modeled dynamic part of the DAEs using an existing
penalty-based algorithm. Finally, the paper demonstrates the efficacy and
precision of the proposed NN through the identification and simulation of
solution trajectories for the considered DAE-modeled power system.
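The IRK-with-hard-constraints ingredient can be sketched without the NN components. The toy semi-explicit DAE and the function names below are illustrative assumptions, not the paper's power-system model; the point is that the algebraic variable is recomputed from the constraint at every stage, so the algebraic equation holds exactly along the step:

```python
import numpy as np

def irk_midpoint_dae_step(x, h, f, solve_g, iters=60):
    """One implicit-midpoint (1-stage IRK) step for the semi-explicit DAE
    x' = f(x, z), 0 = g(x, z). The algebraic variable z is recomputed from
    the constraint at every stage evaluation, so 0 = g(x, z) is enforced
    exactly (the 'hard constraint' idea)."""
    xm = x
    for _ in range(iters):              # fixed-point iteration on the stage
        xm = x + 0.5 * h * f(xm, solve_g(xm))
    return x + h * f(xm, solve_g(xm))

# toy DAE: x' = -x + z,  0 = z - x**2  (so the constraint gives z = x**2)
f = lambda x, z: -x + z
solve_g = lambda x: x ** 2

h, n = 0.05, 40                         # integrate to t = 2
x_irk = 0.5
for _ in range(n):
    x_irk = irk_midpoint_dae_step(x_irk, h, f, solve_g)

# reference: fine explicit Euler on the reduced ODE x' = -x + x**2
x_ref, dt = 0.5, 1e-4
for _ in range(20000):
    x_ref += dt * (-x_ref + x_ref ** 2)
```

In the paper's framework an NN would supply (or be trained against) the stage dynamics; here a known `f` stands in so the IRK/constraint mechanics are visible in isolation.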
Analysis of Nonlinear Dynamics of Neural Networks
[No abstract available]
Modelling and Contractivity of Neural-Synaptic Networks with Hebbian Learning
This paper is concerned with the modelling and analysis of two of the most
commonly used recurrent neural network models (i.e., Hopfield neural network
and firing-rate neural network) with dynamic recurrent connections undergoing
Hebbian learning rules. To capture the synaptic sparsity of neural circuits,
we propose a low-dimensional formulation. We then characterize certain key
dynamical properties. First, we give biologically-inspired forward invariance
results. Then, we give sufficient conditions for the non-Euclidean
contractivity of the models. Our contraction analysis leads to stability and
robustness of time-varying trajectories -- for networks with both excitatory
and inhibitory synapses governed by both Hebbian and anti-Hebbian rules. For
each model, we propose a contractivity test based upon biologically meaningful
quantities, e.g., neural and synaptic decay rate, maximum in-degree, and the
maximum synaptic strength. Then, we show that the models satisfy Dale's
Principle. Finally, we illustrate the effectiveness of our results via a
numerical example.
Comment: 24 pages, 4 figures
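The behavior that contractivity certifies can be illustrated with a minimal firing-rate-plus-Hebbian simulation. The model form, network size, and parameter values below are illustrative assumptions (decay rates chosen to dominate the maximum synaptic strength), not the paper's contractivity test: two trajectories of the coupled neural-synaptic system, started from different states, converge to each other:

```python
import numpy as np

def step(x, W, u, dt, a=2.0, b=2.0, eta=0.1):
    """Euler step of a firing-rate network with Hebbian synapses:
    x' = -a*x + W@tanh(x) + u,   W' = -b*W + eta*tanh(x)tanh(x)^T."""
    r = np.tanh(x)
    x_new = x + dt * (-a * x + W @ r + u)
    W_new = W + dt * (-b * W + eta * np.outer(r, r))
    return x_new, W_new

def dist(x, W, y, V):
    """Distance between two (neural state, synaptic matrix) pairs."""
    return np.sqrt(np.linalg.norm(x - y) ** 2 + np.linalg.norm(W - V) ** 2)

rng = np.random.default_rng(1)
u = np.array([0.5, -0.3, 0.2])           # common external input
x1, W1 = rng.standard_normal(3), 0.05 * rng.standard_normal((3, 3))
x2, W2 = rng.standard_normal(3), 0.05 * rng.standard_normal((3, 3))

d0 = dist(x1, W1, x2, W2)
dt = 0.01
for _ in range(1000):                    # integrate to t = 10
    x1, W1 = step(x1, W1, u, dt)
    x2, W2 = step(x2, W2, u, dt)
d_end = dist(x1, W1, x2, W2)             # far smaller than d0
```

Because the neural and synaptic decay rates (a, b) dominate the bounded Hebbian coupling, the joint dynamics contract and the inter-trajectory distance shrinks exponentially, which is the robustness property the abstract's contractivity tests certify.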
Singular Perturbation via Contraction Theory
In this paper, we provide a novel contraction-theoretic approach to analyze
two-time scale systems. In our proposed framework, systems enjoy several
robustness properties, which can lead to a more complete characterization of
their behaviors. Key assumptions are the contractivity of the fast sub-system
and of the reduced model, combined with an explicit upper bound on the
time-scale parameter. For two-time scale systems subject to disturbances, we
show that the distance between solutions of the nominal system and solutions of
its reduced model is uniformly upper bounded by a function of contraction
rates, Lipschitz constants, the time-scale parameter, and the time variability
of the disturbances. We also show local contractivity of the two-time scale
system and give sufficient conditions for global contractivity. We then
consider two special cases: for autonomous nonlinear systems we obtain sharper
bounds than our general results and for linear time-invariant systems we
present novel bounds based upon log norms and induced norms. Finally, we apply
our theory to two application areas -- online feedback optimization and
Stackelberg games -- and obtain new individual tracking error bounds, showing
that solutions converge to their (time-varying) optimizers, and compute overall
contraction rates.
Comment: This paper has been submitted to IEEE Transactions on Automatic Control.
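A minimal numerical illustration of the slow/reduced-model gap (the example two-time-scale system is an assumption for illustration, not one from the paper): with a contracting fast subsystem and a contracting reduced model, the slow state of the full system stays close to the reduced model after the initial fast transient, with a gap on the order of the time-scale parameter:

```python
import numpy as np

# Two-time-scale system:  x' = -x + z,   eps * z' = -(z - cos(x)).
# The fast subsystem's quasi-steady state is z = cos(x), giving the
# reduced model x' = -x + cos(x), whose equilibrium solves x = cos(x)
# (the Dottie number, ~0.7391).
eps, dt, T = 0.01, 1e-3, 10.0
steps = int(T / dt)

x, z = 0.0, 0.0                          # full two-time-scale system
xr = 0.0                                 # reduced (slow) model
for _ in range(steps):
    x, z = (x + dt * (-x + z),
            z + dt * (-(z - np.cos(x)) / eps))
    xr += dt * (-xr + np.cos(xr))

gap = abs(x - xr)                        # O(eps) after the fast transient
```

Both trajectories settle at the same equilibrium, and the gap between the full slow state and the reduced model is uniformly small, consistent with the kind of bound (in terms of contraction rates, Lipschitz constants, and the time-scale parameter) stated in the abstract.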