
    Designing heteroclinic and excitable networks in phase space using two populations of coupled cells

    We give a constructive method for realizing an arbitrary directed graph (with no one-cycles) as a heteroclinic or an excitable dynamic network in the phase space of a system of coupled cells of two types. In each case, the system is expressed as a system of first-order differential equations. One of the cell types (the p-cells) interacts by mutual inhibition and classifies which vertex (state) we are currently close to, while the other cell type (the y-cells) excites the p-cells selectively and becomes active only when there is a transition between vertices. We exhibit open sets of parameter values such that these dynamical networks exist and demonstrate via numerical simulation that they can be attractors for suitably chosen parameters.
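As a concrete illustration of how mutual inhibition can produce switching between vertex states, the following minimal Python sketch integrates a May-Leonard-style competition system with a small amount of noise. It is a stand-in under assumed parameters (the competition matrix A and noise level are choices made here), not the paper's p-cell/y-cell construction.

```python
import numpy as np

# Minimal sketch of mutually inhibiting cells producing heteroclinic
# switching (a May-Leonard-style competition system). Illustrates the
# mutual-inhibition mechanism only; the competition matrix A and noise
# level are assumptions, not the paper's p-cell/y-cell equations.
rng = np.random.default_rng(0)
A = np.array([[1.0, 1.8, 0.5],    # asymmetric inhibition: cyclic dominance
              [0.5, 1.0, 1.8],
              [1.8, 0.5, 1.0]])

def step(p, dt=1e-2, noise=1e-8):
    dp = p * (1.0 - A @ p)        # Lotka-Volterra competition dynamics
    return np.clip(p + dt * dp + noise * rng.standard_normal(3), 0.0, None)

p = np.array([0.9, 0.05, 0.05])
visited = []
for t in range(300_000):
    p = step(p)
    if t % 5000 == 0:
        visited.append(int(np.argmax(p)))  # vertex we are currently near
print(visited)  # long residences at each vertex, with cyclic transitions
```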

    From coupled networks of systems to networks of states in phase space

    Dynamical systems on graphs can show a wide range of behaviours beyond simple synchronization: even simple globally coupled structures can exhibit attractors with intermittent and slow switching between patterns of synchrony. Such attractors, called heteroclinic networks, can be well described as networks in phase space. In this paper we review some results and examples of how these robust attractors can be characterised from their synchrony properties, as well as how coupled systems can be designed to exhibit given but arbitrary network attractors in phase space.

    Sensitive finite state computations using a distributed network with a noisy network attractor

    We exhibit a class of smooth continuous-state neural-inspired networks composed of simple nonlinear elements that can be made to function as a finite-state computational machine. We give an explicit construction of arbitrary finite-state virtual machines in the spatio-temporal dynamics of the network. The dynamics of the functional network can be completely characterised as a “noisy network attractor” in phase space operating in either an “excitable” or a “free-running” regime, respectively corresponding to excitable or heteroclinic connections between states. The regime depends on the sign of an “excitability parameter”. Viewing the network as a nonlinear stochastic differential equation where deterministic (signal) and/or stochastic (noise) input are applied to any element, we explore the influence of the signal-to-noise ratio on the error rate of the computations. The free-running regime is extremely sensitive to inputs: arbitrarily small amplitude perturbations can be used to perform computations with the system as long as the input dominates the noise. We find a counter-intuitive regime where increasing noise amplitude can lead to more, rather than less, accurate computation. We suggest that noisy network attractors will be useful for understanding neural networks that reliably and sensitively perform finite-state computations in a noisy environment. PA gratefully acknowledges the financial support of the EPSRC via grant EP/N014391/1. CMP acknowledges travel funding from the University of Auckland and support from the London Mathematical Laboratory.
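The excitable regime can be illustrated with a one-dimensional Euler-Maruyama sketch: a double-well element that switches state only when an input pulse dominates the noise. The potential, pulse amplitude and parameter values below are illustrative assumptions, not the network construction used in the paper.

```python
import numpy as np

# Euler-Maruyama sketch of a single excitable element: a double-well
# state variable switched only when an input pulse dominates the noise.
# Potential, pulse and parameters are illustrative assumptions.
rng = np.random.default_rng(1)
dt, eta, T = 1e-3, 0.05, 200_000
x = -1.0                              # start in the left well ("state 0")

def drift(x):                         # -V'(x) for V(x) = x**4/4 - x**2/2
    return x - x**3

signal = np.zeros(T)
signal[50_000:51_000] = 3.0           # brief input pulse (the "signal")

for t in range(T):
    x += (drift(x) + signal[t]) * dt + eta * np.sqrt(dt) * rng.standard_normal()

print("final state:", "1 (right well)" if x > 0 else "0 (left well)")
```

With the noise level well below the pulse amplitude, the element switches exactly once, at the pulse; raising eta far enough eventually produces spontaneous (Kramers-type) switches.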

    Quantifying noisy attractors: from heteroclinic to excitable networks

    Attractors of dynamical systems may be networks in phase space that can be heteroclinic (where there are dynamical connections between simple invariant sets) or excitable (where a perturbation threshold needs to be crossed to trigger a dynamical connection between “nodes”). Such network attractors can display a high degree of sensitivity to noise, both in terms of the regions of phase space visited and in terms of the sequence of transitions around the network. The two types of network are intimately related: one can directly bifurcate to the other. In this paper we attempt to quantify the effect of additive noise on such network attractors. Noise increases the average rate at which the networks are explored, and can result in “macroscopic” random motion around the network. We perform an asymptotic analysis of the local behaviour of an escape model near heteroclinic/excitable nodes in the limit of noise η → 0⁺ as a model for the mean residence time T near equilibria. The heteroclinic network case has T proportional to −ln η, while the excitable network has T given by a Kramers’ law, proportional to exp(B/η²). There is singular scaling behaviour (where T is proportional to 1/η) at the bifurcation between the two types of network. We also explore transition probabilities between nodes of the network in the presence of anisotropic noise. For low levels of noise, numerical results suggest that a (heteroclinic or excitable) network can approximately realise any set of transition probabilities and any sufficiently large mean residence times at the given nodes. We show that this can be well modelled in our example network by multiple independent escape processes, where the direction of first escape determines the transition. This suggests that it is feasible to design noisy network attractors with arbitrary Markov transition probabilities and residence times. We thank many people for stimulating conversations that contributed to the development of this paper, in particular Chris Bick, Nils Berglund, Mike Field, John Terry and Ilze Ziedins. We thank the London Mathematical Society for support of a visit of CMP to Exeter, and the University of Auckland Research Council for supporting a visit of PA to Auckland during the development of this research. PA gratefully acknowledges the financial support of the EPSRC via grant EP/N014391/1.
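The three residence-time asymptotics quoted in the abstract can be collected in display form; A, B, C, C' are generic positive constants depending on the node, not values taken from the paper.

```latex
% Mean residence time T near a node as the noise level \eta \to 0^+:
\begin{align*}
  T(\eta) &\sim -C \ln \eta                     && \text{heteroclinic connection,} \\
  T(\eta) &\sim A \exp\!\bigl(B/\eta^{2}\bigr)  && \text{excitable connection (Kramers' law),} \\
  T(\eta) &\sim C'/\eta                         && \text{at the bifurcation between the two types.}
\end{align*}
```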

    Mathematical frameworks for oscillatory network dynamics in neuroscience

    The tools of weakly coupled phase oscillator theory have had a profound impact on the neuroscience community, providing insight into a variety of network behaviours ranging from central pattern generation to synchronisation, as well as predicting novel network states such as chimeras. However, there are many instances where this theory is expected to break down, say in the presence of strong coupling, or must be carefully interpreted, as in the presence of stochastic forcing. There are also surprises in the dynamical complexity of the attractors that can robustly appear, for example heteroclinic network attractors. In this review we present a set of mathematical tools that are suitable for addressing the dynamics of oscillatory neural networks, broadening from a standard phase oscillator perspective to provide a practical framework for further successful applications of mathematics to understanding network dynamics in neuroscience.
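For reference, the weakly coupled phase oscillator baseline that the review broadens from can be written down in a few lines. The Kuramoto sketch below (with assumed frequencies and coupling strength K) shows synchrony emerging, as measured by the order parameter r.

```python
import numpy as np

# Minimal Kuramoto sketch of weakly coupled phase oscillators:
# dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# Frequencies and coupling K are illustrative choices.
rng = np.random.default_rng(2)
N, K, dt = 100, 3.0, 1e-2
omega = rng.standard_normal(N)            # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)  # random initial phases

for _ in range(20_000):
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += (omega + K * coupling) * dt

r = np.abs(np.exp(1j * theta).mean())     # order parameter: r = 1 is full synchrony
print(f"order parameter r = {r:.2f}")     # K above critical gives r close to 1
```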

    Almost complete and equable heteroclinic networks

    Heteroclinic connections are trajectories that link invariant sets for an autonomous dynamical flow: these connections can robustly form networks between equilibria for systems with flow-invariant spaces. In this paper we examine the relation between the heteroclinic network as a flow-invariant set and directed graphs of possible connections between nodes. We consider realizations of a large class of transitive digraphs as robust heteroclinic networks and show that although robust realizations are typically not complete (i.e. not all unstable manifolds of nodes are part of the network), they can be almost complete (i.e. complete up to a set of zero measure within the unstable manifold) and equable (i.e. all sets of connections from a node have the same dimension). We show there are almost complete and equable realizations that can be closed by adding a number of extra nodes and connections. We discuss some examples and describe a sense in which an equable almost complete network embedding is an optimal description of stochastically perturbed motion on the network.

    Excitable networks for finite state computation with continuous time recurrent neural networks

    Continuous-time recurrent neural networks (CTRNNs) are systems of coupled ordinary differential equations that are simple enough to be insightful for describing learning and computation, from both biological and machine learning viewpoints. We describe a direct constructive method of realising finite-state input-dependent computations on an arbitrary directed graph. The constructed system has an excitable network attractor whose dynamics we illustrate with a number of examples. The resulting CTRNN has intermittent dynamics: trajectories spend long periods of time close to steady-state, with rapid transitions between states. Depending on parameters, transitions between states can either be excitable (inputs or noise needs to exceed a threshold to induce the transition) or spontaneous (transitions occur without input or noise). In the excitable case, we show the threshold for excitability can be made arbitrarily sensitive. Funding: Royal Society Te Apārangi; Engineering and Physical Sciences Research Council (EPSRC); London Mathematical Laboratory.
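A minimal CTRNN of the general form τ ẏ = −y + W σ(y + b) + input can already show the intermittent, input-switched behaviour described above. The two-unit weights and switching pulse below are hand-built illustrative assumptions, not the excitable network attractor constructed in the paper.

```python
import numpy as np

# Minimal CTRNN sketch, tau * y' = -y + W @ sigma(y + b) + input: a
# hand-built two-unit flip-flop whose stable states are switched by a
# brief push-pull input pulse. Weights and pulse are assumptions.
def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

W = np.array([[ 6.0, -4.0],
              [-4.0,  6.0]])            # self-excitation, mutual inhibition
b = np.array([-2.0, -2.0])
tau, dt = 1.0, 1e-2

y = np.array([6.0, -4.0])               # near the "unit 0 active" state
for t in range(3000):
    inp = np.array([-8.0, 8.0]) if 800 <= t < 1000 else np.zeros(2)
    y = y + (dt / tau) * (-y + W @ sigma(y + b) + inp)

print(np.round(sigma(y + b), 2))        # pulse has switched activity to unit 1
```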

    Noisy network attractor models for transitions between EEG microstates

    Availability of data and material: the data used in this paper are published in [18] and are available from JB on reasonable request. The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) provide anatomical and temporal information about the resting-state networks (RSNs), respectively. EEG microstates are brief periods of stable scalp topography, and four distinct configurations with characteristic switching patterns between them are reliably identified at rest. Microstates have been identified as the electrophysiological correlate of fMRI-defined RSNs; this link could be established because EEG microstate sequences are scale-free and have long-range temporal correlations. This property is crucial for any approach to modelling EEG microstates. This paper proposes a novel modelling approach for microstates: we consider nonlinear stochastic differential equations (SDEs) that exhibit a noisy network attractor between nodes that represent the microstates. Using a single-layer network between four states, we can reproduce the transition probabilities between microstates but not the heavy-tailed residence time distributions. Introducing a two-layer network with a hidden layer gives the flexibility to capture these heavy tails and their long-range temporal correlations. We fit these models to capture the statistical properties of microstate sequences from EEG data recorded inside and outside the MRI scanner, and show that the processing required to separate the EEG signal from the fMRI machine noise results in a loss of information, which is reflected in differences in the long tail of the dwell-time distributions. Funding: Engineering and Physical Sciences Research Council (EPSRC); Medical Research Council (MRC); Marsden Fund, Royal Society of New Zealand.
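The single-layer limitation mentioned above is easy to see in a sketch: a memoryless four-state chain can match any set of transition probabilities, but its dwell times are geometric rather than heavy-tailed, which is what motivates the hidden layer. The transition matrix P below is an arbitrary illustration, not fitted EEG statistics.

```python
import numpy as np

# Memoryless 4-state chain: reproduces transition probabilities, but
# dwell times decay geometrically (mean 1/(1 - P[i, i])), with no heavy
# tail. P is an arbitrary illustration, not fitted to EEG data.
rng = np.random.default_rng(3)
P = np.array([[0.90, 0.04, 0.03, 0.03],
              [0.05, 0.88, 0.04, 0.03],
              [0.03, 0.04, 0.90, 0.03],
              [0.04, 0.03, 0.03, 0.90]])

state, seq = 0, []
for _ in range(100_000):
    seq.append(state)
    state = rng.choice(4, p=P[state])

# dwell times = lengths of runs of the same state
runs = np.diff(np.flatnonzero(np.diff(seq)))   # gaps between state changes
print("mean dwell:", runs.mean(), "| max dwell:", runs.max())
```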

    Interpreting recurrent neural networks behaviour via excitable network attractors

    Introduction: Machine learning provides fundamental tools both for scientific research and for the development of technologies with significant impact on society. It provides methods that facilitate the discovery of regularities in data and that give predictions without explicit knowledge of the rules governing a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically black boxes, and it is difficult to fully understand what the machine is doing or how it is operating. This poses constraints on the applicability and explainability of such methods. Methods: Our research aims to open the black box of recurrent neural networks, an important family of neural networks used for processing sequential data. We propose a novel methodology that provides a mechanistic interpretation of behaviour when solving a computational task. Our methodology uses mathematical constructs called excitable network attractors, which are invariant sets in phase space composed of stable attractors and excitable connections between them. Results and Discussion: As the behaviour of recurrent neural networks depends both on training and on inputs to the system, we introduce an algorithm to extract network attractors directly from the trajectory of a neural network while it solves tasks. Simulations conducted on a controlled benchmark task confirm the relevance of these attractors for interpreting the behaviour of recurrent neural networks, at least for tasks that involve learning a finite number of stable states and transitions between them.
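A toy version of the extraction idea, flagging candidate stable states as slow segments of a trajectory and labelling them by position, might look like the sketch below. The speed threshold and synthetic trajectory are assumptions; this is a simplified stand-in, not the extraction algorithm proposed in the paper.

```python
import numpy as np

# Toy sketch: flag slow segments of a trajectory (candidate stable
# states) by thresholding state-space speed, then label by position.
# Thresholds and the synthetic trajectory are assumptions.
rng = np.random.default_rng(4)

def slow_mask(traj, dt, speed_tol=0.05):
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    return speed < speed_tol            # near-steady samples

# synthetic trajectory: dwell near (0,0), rapid transition, dwell near (1,1)
dwell0 = 0.01 * rng.standard_normal((500, 2))
jump = np.linspace([0.0, 0.0], [1.0, 1.0], 10)
dwell1 = 1.0 + 0.01 * rng.standard_normal((500, 2))
traj = np.vstack([dwell0, jump, dwell1])

mask = slow_mask(traj, dt=1.0)
states = np.unique(np.round(traj[:-1][mask]).astype(int), axis=0)
print("extracted states:\n", states)    # -> [[0 0], [1 1]]
```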