Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays
In this paper, we introduce a new class of discrete-time neural networks (DNNs) with Markovian jumping parameters as well as mode-dependent mixed time delays (both discrete and distributed time delays). Specifically, the parameters of the DNNs switch from one mode to another at different times according to a Markov chain, and the mixed time delays consist of both discrete and distributed delays that depend on the Markovian jumping mode. We first deal with the stability analysis problem of the addressed neural networks. A special inequality is developed to account for the mixed time delays in the discrete-time setting, and a novel Lyapunov-Krasovskii functional is put forward to reflect the mode-dependent time delays. Sufficient conditions are established in terms of linear matrix inequalities (LMIs) that guarantee stochastic stability. We then turn to the synchronization problem for an array of identical coupled Markovian jumping neural networks with mixed mode-dependent time delays. By utilizing the Lyapunov stability theory and the Kronecker product, it is shown that the addressed synchronization problem is solvable if several LMIs are feasible. Hence, different from the commonly used matrix norm theories (such as the M-matrix method), a unified LMI approach is developed to solve the stability analysis and synchronization problems of the class of neural networks under investigation, where the LMIs can be easily solved using the available Matlab LMI toolbox. Two numerical examples are presented to illustrate the usefulness and effectiveness of the main results obtained.
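The paper's LMI conditions account for mode-dependent mixed delays; as a much simpler illustration of the same style of criterion, the sketch below (our toy example, not the paper's construction) checks the classical mean-square stability LMI for a delay-free Markov jump linear system x_{k+1} = A[r_k] x_k, using Python's cvxpy in place of the Matlab LMI toolbox. The system matrices and transition probabilities are made-up values.

    import cvxpy as cp
    import numpy as np

    # Two-mode Markov jump linear system x_{k+1} = A[r_k] x_k (toy matrices)
    A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
         np.array([[0.3, -0.2], [0.1, 0.6]])]
    Pi = np.array([[0.7, 0.3],
                   [0.4, 0.6]])  # mode transition probabilities
    n, m = 2, 2

    # Mean-square stability holds iff there exist P_i > 0 such that
    # A_i' (sum_j Pi[i, j] P_j) A_i - P_i < 0 for every mode i.
    P = [cp.Variable((n, n), symmetric=True) for _ in range(m)]
    eps = 1e-6
    cons = []
    for i in range(m):
        EP = sum(Pi[i, j] * P[j] for j in range(m))  # expected Lyapunov matrix
        cons += [P[i] >> eps * np.eye(n),
                 A[i].T @ EP @ A[i] - P[i] << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    print("LMIs feasible (stochastically stable):", prob.status == cp.OPTIMAL)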
Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) towards a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged towards their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal 'back-propagated' during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing-dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not.
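A minimal numpy sketch of the two-phase procedure (ours, not the authors' released code) on a tiny layered network with symmetric weights and a hard-sigmoid nonlinearity; the layer sizes, step sizes, nudging strength beta, and quadratic cost are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def rho(s):
        # Hard-sigmoid activation
        return np.clip(s, 0.0, 1.0)

    def drho(s):
        # Its derivative: 1 inside [0, 1], 0 outside
        return ((s >= 0.0) & (s <= 1.0)).astype(float)

    nx, nh, ny = 4, 8, 2                   # toy layer sizes (our choice)
    W1 = rng.normal(0.0, 0.1, (nx, nh))    # symmetric input-hidden weights
    W2 = rng.normal(0.0, 0.1, (nh, ny))    # symmetric hidden-output weights

    def relax(x, target=None, beta=0.0, steps=100, dt=0.5):
        # Gradient descent on the total energy F = E + beta * C, with
        # C = 0.5 * ||y - target||^2; beta = 0 gives the free phase.
        h, y = np.zeros(nh), np.zeros(ny)
        for _ in range(steps):
            dh = drho(h) * (rho(x) @ W1 + rho(y) @ W2.T) - h
            dy = drho(y) * (rho(h) @ W2) - y
            if target is not None:
                dy += beta * (target - y)  # weak nudge toward the target
            h, y = h + dt * dh, y + dt * dy
        return h, y

    def eqprop_update(x, target, beta=0.5, lr=0.2):
        # Two phases, then a contrastive Hebbian-style update that
        # approximates the objective's gradient as beta -> 0.
        global W1, W2
        h0, y0 = relax(x)                  # phase 1: free relaxation
        hb, yb = relax(x, target, beta)    # phase 2: weakly clamped
        W1 += (lr / beta) * np.outer(rho(x), rho(hb) - rho(h0))
        W2 += (lr / beta) * (np.outer(rho(hb), rho(yb))
                             - np.outer(rho(h0), rho(y0)))
        return y0

    x, target = rng.uniform(0.0, 1.0, nx), np.array([1.0, 0.0])
    for step in range(50):
        eqprop_update(x, target)
    print("free-phase prediction:", relax(x)[1], "target:", target)

Note that both phases run the same relaxation dynamics; as the abstract emphasizes, the only local difference is that weight changes are applied after the second phase.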
Robust short-term memory without synaptic learning
Short-term memory in the brain cannot in general be explained the way long-term memory can (as a gradual modification of synaptic weights), since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.
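A loose toy illustration of the general idea (our construction with hand-tuned parameters, not the authors' model): a clustered network of noisy binary threshold units in which a transient stimulus leaves one cluster reverberating for a while with no weight change at all, until noise extinguishes it.

    import numpy as np

    rng = np.random.default_rng(1)

    # 4 clusters of 50 stochastic binary neurons; excitation is stronger
    # inside a cluster, with weak uniform inhibition (hand-tuned values).
    n_clusters, size = 4, 50
    N = n_clusters * size
    J = np.full((N, N), -0.3 / N)          # weak global inhibition
    for c in range(n_clusters):
        sl = slice(c * size, (c + 1) * size)
        J[sl, sl] = 0.92 / size            # stronger intra-cluster excitation
    np.fill_diagonal(J, 0.0)

    theta, sigma = 0.6, 0.18               # firing threshold, noise amplitude
    s = np.zeros(N)
    s[:size] = 1.0                         # transient stimulus: ignite cluster 0

    trace = []
    for t in range(300):
        field = J @ s + sigma * rng.standard_normal(N)
        s = (field > theta).astype(float)  # synchronous threshold update
        trace.append(s[:size].mean())      # activity of the stimulated cluster

    # The weights never change, yet the stimulated cluster keeps
    # reactivating itself before collapsing; the lifetime varies with
    # the seed and the parameters above.
    alive = [t for t, r in enumerate(trace) if r > 0.5]
    print("cluster 0 stayed on for", (max(alive) + 1) if alive else 0, "steps")
    print("cluster-0 rate every 25 steps:", [round(r, 2) for r in trace[::25]])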
State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays
This paper is concerned with the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The parameters of the neural networks under consideration switch over time subject to a Markov chain. The networks involve both the discrete time-varying delay and the mode-dependent distributed time-delay characterized by upper and lower boundaries that depend on the Markov chain. By constructing novel Lyapunov-Krasovskii functionals, sufficient conditions are first established to guarantee the exponential stability in mean square for the addressed discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The state estimation problem is then addressed for the same neural networks, where the goal is to design a state estimator such that the estimation error approaches zero exponentially in mean square. The derived conditions for both the stability and the existence of the desired estimators are expressed in the form of matrix inequalities that can be solved by semi-definite programming. A numerical simulation example is exploited to demonstrate the usefulness of the main results obtained. This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60774073 and 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010313.
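The paper's estimator conditions involve Markov modes and mixed delays; as a stripped-down illustration of designing an estimator by semi-definite programming, the sketch below (our toy example, not the paper's construction) synthesizes a Luenberger observer gain for a single delay-free mode. With error dynamics e_{k+1} = (A - LC) e_k, the substitution Y = PL turns the Lyapunov condition into an LMI via a Schur complement.

    import cvxpy as cp
    import numpy as np

    # Toy delay-free plant x_{k+1} = A x_k with measurement y_k = C x_k
    A = np.array([[1.01, 0.2], [0.0, 0.8]])
    C = np.array([[1.0, 0.0]])
    n, p = A.shape[0], C.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, p))
    eps = 1e-6
    # Schur-complement form of (A - LC)' P (A - LC) - P < 0 with Y = P L
    M = P @ A - Y @ C
    lmi = cp.bmat([[P, M.T],
                   [M, P]])
    cons = [P >> eps * np.eye(n), lmi >> eps * np.eye(2 * n)]
    cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

    L = np.linalg.solve(P.value, Y.value)  # recover the gain L = P^{-1} Y
    print("estimator gain L:\n", L)
    print("error spectral radius:", max(abs(np.linalg.eigvals(A - L @ C))))

A spectral radius below 1 confirms that the estimation error decays exponentially, which is the delay-free analogue of the exponential mean-square convergence sought in the paper.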