
    Attractors in fully asymmetric neural networks

    The statistical properties of the length of the cycles and of the weights of the attraction basins in fully asymmetric neural networks (i.e. with completely uncorrelated synapses) are computed in the framework of the annealed approximation which we previously introduced for the study of Kauffman networks. Our results show that this model behaves essentially as a Random Map possessing a reversal symmetry. Comparison with numerical results suggests that the approximation could become exact in the infinite size limit.
    Comment: 23 pages, 6 figures, LaTeX, to appear in J. Phys.
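    As an illustration of the Random Map comparison, the cycle and basin statistics of a random map can be sampled directly. The sketch below is a generic illustration, not the paper's annealed calculation, and all names are ours: it draws a random function on n states, follows each state until its trajectory closes, and tallies the weight of each attractor's basin.

    ```python
    import random

    def random_map_attractors(n, seed=0):
        """Enumerate the attractors of a random map f: {0..n-1} -> {0..n-1}.

        Returns a dict mapping each cycle (as a frozenset of states) to the
        weight of its basin of attraction (fraction of states flowing into it).
        """
        rng = random.Random(seed)
        f = [rng.randrange(n) for _ in range(n)]
        basin = {}
        for start in range(n):
            # Iterate until a state is revisited: that closes a cycle.
            seen = {}
            x, t = start, 0
            while x not in seen:
                seen[x] = t
                x, t = f[x], t + 1
            # States visited at or after the first visit of x lie on the cycle.
            cycle = frozenset(s for s, time in seen.items() if time >= seen[x])
            basin[cycle] = basin.get(cycle, 0) + 1
        return {c: w / n for c, w in basin.items()}

    weights = random_map_attractors(1000)
    print(sum(weights.values()))  # ~ 1.0: the basins partition the state space
    ```

    Averaging cycle lengths and basin weights over many seeds gives the Random Map statistics against which the annealed approximation is compared.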

    Relaxation, closing probabilities and transition from oscillatory to chaotic attractors in asymmetric neural networks

    Attractors in asymmetric neural networks with deterministic parallel dynamics were shown to present a "chaotic" regime at symmetry η < 0.5, where the average length of the cycles increases exponentially with system size, and an oscillatory regime at high symmetry, where the typical length of the cycles is 2. We show, both with analytic arguments and numerically, that there is a sharp transition, at a critical symmetry η_c = 0.33, between a phase where the typical cycles have length 2 and basins of attraction of vanishing weight, and a phase where the typical cycles are exponentially long in system size and the weights of their attraction basins are distributed as in a Random Map with reversal symmetry. The time scale after which cycles are reached grows exponentially with system size N, and the exponent vanishes in the symmetric limit, where T ∝ N^{2/3}. The transition can be related to the dynamics of the infinite system (where cycles are never reached), using the closing probabilities as a tool. We also study the relaxation of the function E(t) = -(1/N) ∑_i |h_i(t)|, where h_i is the local field experienced by neuron i. In the symmetric system, it plays the role of a Lyapunov function which drives the system towards its minima through steepest descent. This interpretation survives, if only on average, also for small asymmetry, which acts like an effective temperature: the larger the asymmetry, the faster E relaxes and the higher the asymptotic value it reaches. E reaches very deep minima at the fixed points of the dynamics, which are attained with vanishing probability, and takes a larger value on the typical attractors, which are cycles of length 2.
    Comment: 24 pages, 9 figures, accepted in Journal of Physics A: Math. Gen.
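    The parallel dynamics and the function E(t) are straightforward to simulate. The sketch below is a generic illustration, not the paper's exact model: the way the symmetry parameter eta mixes a symmetric and an independent Gaussian coupling matrix is our own convention; only the synchronous update s_i ← sign(h_i) and the definition E(t) = -(1/N) ∑_i |h_i(t)| follow the abstract.

    ```python
    import numpy as np

    def simulate(N=100, eta=0.5, T=200, seed=0):
        """Synchronous dynamics of a +/-1 network with tunable symmetry.

        eta interpolates between fully symmetric (eta=1) and fully
        asymmetric (eta=0) Gaussian couplings: J = eta*S + (1-eta)*A,
        with S symmetric and A independent (an illustrative convention,
        not the paper's parametrisation). Returns the trajectory of
        E(t) = -(1/N) * sum_i |h_i(t)|.
        """
        rng = np.random.default_rng(seed)
        G = rng.standard_normal((N, N))
        S = (G + G.T) / np.sqrt(2)           # symmetric part
        A = rng.standard_normal((N, N))      # independent asymmetric part
        J = eta * S + (1 - eta) * A
        np.fill_diagonal(J, 0.0)

        s = rng.choice([-1.0, 1.0], size=N)
        E = []
        for _ in range(T):
            h = J @ s                        # local fields h_i(t)
            E.append(-np.abs(h).mean())      # E(t) = -(1/N) sum_i |h_i(t)|
            s = np.where(h >= 0, 1.0, -1.0)  # parallel update s_i <- sign(h_i)
        return np.array(E)

    E = simulate(eta=1.0)  # symmetric case: E is non-increasing (Lyapunov)
    ```

    In the symmetric case (eta = 1) the recorded E(t) decreases monotonically, in line with its role as a Lyapunov function for parallel dynamics; increasing the asymmetry destroys the strict monotonicity but, on average, speeds up the relaxation.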

    A "Cellular Neuronal" Approach to Optimization Problems

    The Hopfield-Tank (1985) recurrent neural network architecture for the Traveling Salesman Problem is generalized to a fully interconnected "cellular" neural network of regular oscillators. Tours are defined by synchronization patterns, allowing the simultaneous representation of all cyclic permutations of a given tour. The network converges to local optima, some of which correspond to shortest-distance tours, as can be shown analytically in a stationary phase approximation. Simulated annealing is required for global optimization, but the stochastic element might be replaced by chaotic intermittency in a further generalization of the architecture to a network of chaotic oscillators.
    Comment: 2nd revised version submitted to Chaos (original version submitted 6/07)

    Boolean Dynamics with Random Couplings

    This paper reviews a class of generic dissipative dynamical systems called N-K models. In these models, the dynamics of N elements, defined as Boolean variables, develop step by step, clocked by a discrete time variable. Each of the N Boolean elements at a given time is given a value which depends upon K elements in the previous time step. We review the work of many authors on the behavior of the models, looking particularly at the structure and lengths of their cycles, the sizes of their basins of attraction, and the flow of information through the systems. In the limit of infinite N, there is a phase transition between a chaotic and an ordered phase, with a critical phase in between. We argue that the behavior of this system depends significantly on the topology of the network connections. If the elements are placed upon a lattice with dimension d, the system shows correlations related to the standard percolation or directed percolation phase transition on such a lattice. On the other hand, a very different behavior is seen in the Kauffman net, in which all spins are equally likely to be coupled to a given spin. In this situation, coupling loops are mostly suppressed, and the behavior of the system is much more like that of a mean field theory. We also describe possible applications of the models to, for example, genetic networks, cell differentiation, evolution, democracy in social systems and neural networks.
    Comment: 69 pages, 16 figures, submitted to Springer Applied Mathematical Sciences Series
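    The synchronous update rule of an N-K model — each element reads K randomly chosen inputs through a random Boolean function — fits in a few lines. The sketch below is our illustration with hypothetical names, not code from the review: it runs a Kauffman net from a random initial state and returns the length of the cycle the trajectory falls into.

    ```python
    import random

    def kauffman_cycle_length(N=12, K=2, seed=0, max_steps=10000):
        """Simulate a random N-K Boolean network (Kauffman net) and
        return the length of the cycle reached from a random state."""
        rng = random.Random(seed)
        # Each element i reads K randomly chosen inputs...
        inputs = [rng.sample(range(N), K) for _ in range(N)]
        # ...through a random Boolean function: a truth table of 2**K bits.
        tables = [[rng.randrange(2) for _ in range(2 ** K)] for _ in range(N)]

        def step(state):
            # Synchronous update: every element switches at the same tick.
            return tuple(
                tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                for i in range(N)
            )

        state = tuple(rng.randrange(2) for _ in range(N))
        seen = {}
        for t in range(max_steps):
            if state in seen:
                return t - seen[state]  # revisited state closes the cycle
            seen[state] = t
            state = step(state)
        return None  # no cycle found within max_steps

    print(kauffman_cycle_length())
    ```

    Since the state space is finite (2^N states), every trajectory must eventually close; histogramming the returned lengths over many realizations reproduces the cycle-length statistics discussed in the review.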

    Global exponential stability of nonautonomous neural network models with unbounded delays

    For a nonautonomous class of n-dimensional differential systems with infinite delays, we give sufficient conditions for global exponential stability, without requiring the existence of an equilibrium point, a periodic solution, or an almost periodic solution. We apply our main result to several concrete neural network models studied in the literature, and a comparison of results is given. Contrary to what is usual in the literature on neural networks, the assumption of bounded coefficients is not needed to obtain global exponential stability. Finally, we present numerical examples to illustrate the effectiveness of our results.
    The paper was supported by the Research Center of Mathematics of the University of Minho with Portuguese Funds from the FCT - "Fundação para a Ciência e a Tecnologia", through the Project UID/MAT/00013/2013. The author thanks the referees for valuable comments.

    Global point dissipativity of neural networks with mixed time-varying delays

    By employing the Lyapunov method and some inequality techniques, the global point dissipativity is studied for neural networks with both discrete time-varying delays and distributed time-varying delays. Simple sufficient conditions are given for checking the global point dissipativity of neural networks with mixed time-varying delays. The proposed linear matrix inequality approach is computationally efficient, as it can be solved numerically using standard commercial software. Illustrative examples are given to show the usefulness of the results in comparison with some existing results. © 2006 American Institute of Physics.

    Existence of global attractor for a nonautonomous state-dependent delay differential equation of neuronal type

    The analysis of the long-term behavior of the mathematical model of a neural network constitutes a suitable framework to develop new tools for the dynamical description of nonautonomous state-dependent delay equations (SDDEs). The concept of global attractor is introduced, and results establishing conditions which ensure its existence and provide a description of its shape are proved. Conditions for the exponential stability of the global attractor are also studied. Some comparison properties of solutions constitute a key step in the proof of the main results, introducing methods of monotonicity into the dynamical analysis of nonautonomous SDDEs. Numerical simulations of some illustrative models show the applicability of the theory.
    Ministerio de Economía y Competitividad / FEDER, MTM2015-66330-P; Ministerio de Ciencia, Innovación y Universidades, RTI2018-096523-B-I00; European Commission, H2020-MSCA-ITN-201

    Phase response function for oscillators with strong forcing or coupling

    The phase response curve (PRC) is an extremely useful tool for studying the response of oscillatory systems, e.g. neurons, to sparse or weak stimulation. Here we develop a framework for studying the response to a series of pulses which are frequent and/or strong, so that the standard PRC fails. We show that in this case the phase shift caused by each pulse depends on the history of several previous pulses. We call the corresponding function which measures this shift the phase response function (PRF). With the introduction of the PRF, a variety of oscillatory systems with pulse interaction, such as neural systems, can be reduced to phase systems. The main assumption of the classical PRC model, i.e. that the effect of a stimulus vanishes before the next one arrives, is no longer a restriction in our approach. However, as a result of the phase reduction, the system acquires memory, which is not just a technical nuisance but an intrinsic property relevant to strong stimulation. We illustrate the PRF approach by applying it to various systems, such as the Morris-Lecar and Hodgkin-Huxley neuron models, among others. We show that the PRF allows the dynamics of forced and coupled oscillators to be predicted even when the PRC fails.
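    For contrast with the PRF, the standard single-pulse PRC protocol — deliver one weak pulse at a given phase and measure the resulting shift of the next spike — can be sketched numerically. The leaky integrate-and-fire neuron below is our choice of toy model (the paper works with Morris-Lecar and Hodgkin-Huxley, and the PRF goes beyond this single-pulse setting); all names are ours.

    ```python
    import math

    def lif_prc(phase, eps=0.01, I=1.5, dt=1e-4):
        """Measure the phase response of a leaky integrate-and-fire neuron
        dv/dt = I - v (threshold 1, reset 0) to a small voltage kick eps
        delivered at the given phase in [0, 1).

        Returns the advance of the next spike, in units of the period.
        """
        T = math.log(I / (I - 1.0))       # unperturbed period of the LIF
        t_kick = phase * T
        v, t, kicked = 0.0, 0.0, False
        while v < 1.0:                    # integrate until threshold crossing
            if not kicked and t >= t_kick:
                v += eps                  # the perturbing pulse
                kicked = True
            v += dt * (I - v)             # forward-Euler step of dv/dt = I - v
            t += dt
        return (T - t) / T                # positive = spike advanced

    print(lif_prc(0.5))                   # small positive advance
    ```

    Sampling `phase` over [0, 1) traces out the PRC. The PRF generalizes exactly this measurement: when pulses arrive before the transient has died out, the shift also depends on the timing of the preceding pulses.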
    • 

    corecore