2 research outputs found

    Maximal switchability of centralized networks

    Full text link
    We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferentially connected to a large number Ns of weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is slow, the long-time network dynamics is completely determined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease in the controller hub's response time can lead to a sharp change in the network's attractor structure: we can obtain a set of new local attractors whose number can increase exponentially with N, the total number of nodes in the network. These new attractors can be periodic or even chaotic. We provide an algorithm that allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
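    To make the setup concrete, here is a minimal sketch (not the authors' code or algorithm) of such a centralized system: continuous-time Hopfield-like rate dynamics dx/dt = (-x + tanh(Wx))/tau on a graph with one hub coupled to all satellites and no satellite-satellite edges, with the hub's response time exposed as the control parameter. All weights, sizes, and time constants are illustrative assumptions; whether a particular network actually switches attractors depends on the interactions, which the paper's algorithm is designed to tune.

    ```python
    # Minimal sketch, assuming Hopfield-like rate dynamics dx/dt = (-x + tanh(W x)) / tau
    # on a centralized (hub + satellites) graph. Weights and parameters are illustrative.
    import numpy as np

    def centralized_weights(n_sat, seed=0):
        rng = np.random.default_rng(seed)
        n = n_sat + 1                             # node 0 is the controller hub
        W = np.zeros((n, n))
        W[0, 1:] = rng.normal(0.0, 1.0, n_sat)    # satellites -> hub
        W[1:, 0] = rng.normal(0.0, 1.0, n_sat)    # hub -> satellites
        return W                                  # no satellite-satellite edges

    def run(W, tau_hub, T=100.0, dt=0.01, seed=1):
        rng = np.random.default_rng(seed)
        n = W.shape[0]
        tau = np.ones(n)
        tau[0] = tau_hub                          # hub response time is the switching knob
        x = rng.normal(0.0, 0.1, n)               # small random initial state
        for _ in range(int(T / dt)):              # forward-Euler integration
            x += dt * (-x + np.tanh(W @ x)) / tau
        return x

    W = centralized_weights(n_sat=50)
    # Compare late-time states for a slow vs. a fast controller hub.
    print("slow hub, |x|:", np.linalg.norm(run(W, tau_hub=50.0)))
    print("fast hub, |x|:", np.linalg.norm(run(W, tau_hub=0.5)))
    ```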

    Stable-yet-Switchable (SyS) Attractor Networks

    No full text
    Abstract: Recurrent neural networks functioning as associative memories are often studied and optimized for recall quality and capacity, with the focus primarily on the network's stability, i.e., convergence to stored attractors. However, the ability of networks to switch between attractors in a controlled way is also potentially useful. Networks that are stable under most conditions but can be switched by specific stimuli may be used to model cognitive control and other time-varying cognitive phenomena. Such networks, which we term stable-yet-switchable (SyS) networks, are also of interest from a networks perspective, and the SyS properties of scale-free networks have been noted by researchers. In this paper, we consider networks with bimodal connectivity, a core of densely connected neurons and a larger periphery with sparser connectivity, and compare their SyS performance with that of random and scale-free recurrent neural networks. The results show that core-periphery networks have much better SyS performance than scale-free networks.
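    For concreteness, the following is a minimal sketch of the bimodal (core-periphery) idea as a classical binary Hopfield associative memory: a densely connected core, a sparsely connected periphery, Hebbian storage of patterns, and a core-targeted stimulus used to probe switching. The sizes, densities, and stimulus protocol are assumptions for illustration, not the paper's experimental setup.

    ```python
    # Minimal sketch, assuming a binary Hopfield memory on a core-periphery graph.
    # All sizes, densities, and the switching stimulus below are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_core, n_peri = 20, 80
    n = n_core + n_peri

    # Connectivity mask: dense within the core, sparse everywhere else.
    mask = (rng.random((n, n)) < 0.1).astype(float)   # sparse baseline
    mask[:n_core, :n_core] = 1.0                      # dense core
    np.fill_diagonal(mask, 0.0)
    mask = np.maximum(mask, mask.T)                   # symmetric connectivity

    # Hebbian storage of two random +/-1 patterns, restricted to existing edges.
    patterns = rng.choice([-1.0, 1.0], size=(2, n))
    W = mask * sum(np.outer(p, p) for p in patterns) / n

    def recall(x, steps=50):
        # Synchronous sign updates (a simple, if crude, recall dynamics).
        for _ in range(steps):
            x = np.sign(W @ x)
            x[x == 0] = 1.0
        return x

    # Stability: a state near pattern 0 should converge back to it.
    x = patterns[0].copy()
    x[:5] *= -1                                       # small perturbation
    print("overlap with pattern 0:", recall(x) @ patterns[0] / n)

    # Switchability: a strong stimulus to the core can tip the network
    # toward the other stored attractor.
    x = patterns[0].copy()
    x[:n_core] = patterns[1, :n_core]                 # stimulate the core only
    print("overlap with pattern 1:", recall(x) @ patterns[1] / n)
    ```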