    Optimisation in ‘Self-modelling’ Complex Adaptive Systems

    When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower-energy configurations more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely 'recalling' low-energy states that have been previously visited but 'predicting' their location by generalising over local attractor states that have already been visited. This 'self-modelling' framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally-mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.
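
    The mechanism this abstract describes is simple to prototype: repeatedly relax a Hopfield-style network to a local attractor, then apply a small Hebbian update on the visited state. Below is a minimal sketch in Python, assuming random symmetric couplings as the constraint network; the sizes, learning rate, and function names are illustrative, not taken from the paper. Energy is always measured against the original couplings J, so any improvement reflects better constraint resolution rather than a redefined objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50                                   # state variables (illustrative size)
    J = rng.normal(size=(N, N))
    J = (J + J.T) / 2                        # symmetric couplings = the fixed constraints
    np.fill_diagonal(J, 0)

    W = J.copy()                             # effective weights, slowly modified by learning
    delta = 0.002                            # Hebbian learning rate (made-up value)

    def relax(W, steps=2000):
        """Asynchronous relaxation from a random state to a local attractor."""
        s = rng.choice([-1, 1], size=N)
        for _ in range(steps):
            i = rng.integers(N)
            s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    def energy(s):
        """Energy on the ORIGINAL constraints J, not on the learned weights W."""
        return -0.5 * s @ J @ s

    for epoch in range(201):
        s = relax(W)
        W += delta * np.outer(s, s)          # reinforce the attractor just visited
        np.fill_diagonal(W, 0)
        if epoch % 50 == 0:
            print(f"epoch {epoch:3d}  energy of visited attractor {energy(s):8.2f}")
    ```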

    Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks

    The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this approach seems to require top-down domain knowledge, and is not one amenable to the spontaneous energy minimisation behaviour of a natural dynamical system. However, in this paper we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organisation. We use a 'self-modelling' Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimisation behaviour of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, fully distributed positive feedback mechanisms that are relevant to other 'active linking' and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks such as social, genetic and ecological networks.
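
    For reference, the 'natural energy minimisation behaviour' invoked here is the standard Hopfield one. Under asynchronous updates with symmetric weights (a textbook result, not specific to this paper), the energy

    ```latex
    E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j,
    \qquad
    s_i \leftarrow \operatorname{sign}\Big(\sum_j w_{ij}\, s_j\Big)
    ```

    never increases at any update, so the dynamics settle into a local minimum of E, i.e. a locally (not globally) optimal resolution of the constraints encoded in the weights.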

    The evolution of phenotypic correlations and “developmental memory”

    Development introduces structured correlations among traits that may constrain or bias the distribution of phenotypes produced. Moreover, when suitable heritable variation exists, natural selection may alter such constraints and correlations, affecting the phenotypic variation available to subsequent selection. However, exactly how the distribution of phenotypes produced by complex developmental systems can be shaped by past selective environments is poorly understood. Here we investigate the evolution of a network of recurrent nonlinear ontogenetic interactions, such as a gene regulation network, in various selective scenarios. We find that evolved networks of this type can exhibit several phenomena that are familiar in cognitive learning systems. These include the formation of a distributed associative memory that can "store" and "recall" multiple phenotypes that have been selected in the past, recreate complete adult phenotypic patterns accurately from partial or corrupted embryonic phenotypes, and "generalize" (by exploiting evolved developmental modules) to produce new combinations of phenotypic features. We show that these surprising behaviors follow from an equivalence between the action of natural selection on phenotypic correlations and associative learning, which is well understood in the context of neural networks. This helps to explain how development facilitates the evolution of high-fitness phenotypes and how this ability changes over evolutionary time.
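
    The equivalence the authors draw between selection on phenotypic correlations and associative learning can be caricatured in a few lines: writing Hebbian correlations between co-selected traits into a recurrent interaction matrix turns nonlinear development into pattern completion. In the toy sketch below, all sizes, the tanh dynamics, and the direct Hebbian shortcut standing in for an evolutionary process are assumptions for illustration only; it shows 'recall' of a previously selected phenotype from a corrupted embryonic state.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 40                                          # number of traits (illustrative)
    targets = rng.choice([-1.0, 1.0], size=(2, N))  # two phenotypes selected in the past

    # Hebbian shortcut standing in for selection acting on ontogenetic interactions:
    # correlations between co-selected traits are written into B.
    B = (targets.T @ targets) / N
    np.fill_diagonal(B, 0)

    def develop(embryo, steps=30, tau=0.5):
        """Recurrent nonlinear development: iterate the interaction network."""
        a = embryo.copy()
        for _ in range(steps):
            a = np.tanh(a + tau * (B @ a))
        return np.sign(a)

    # 'Recall': corrupt a quarter of one target's embryonic pattern, then develop it.
    embryo = targets[0].copy()
    flip = rng.choice(N, size=N // 4, replace=False)
    embryo[flip] *= -1
    adult = develop(embryo)
    print("adult traits matching the selected phenotype:",
          int((adult == targets[0]).sum()), "of", N)
    ```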

    Platonic model of mind as an approximation to neurodynamics

    A hierarchy of approximations involved in the simplification of microscopic theories, from the sub-cellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of this model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic model points of view.

    Robust short-term memory without synaptic learning

    Short-term memory in the brain cannot in general be explained the way long-term memory can -- as a gradual modification of synaptic weights -- since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.
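
    A toy version of the proposed mechanism: with weights held fixed, a sufficiently clustered random network lets a briefly stimulated cluster keep itself active through dense recurrent excitation while the rest of the network stays quiet. The sketch below uses made-up sizes, threshold, and noise statistics, and much simpler units than the paper's models; it is only meant to show metastable retention without any synaptic learning.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    C, n = 8, 25                  # clusters x neurons per cluster (made-up sizes)
    N = C * n
    p_in, p_out = 0.35, 0.01      # dense within clusters, sparse between (assumed)

    block = np.repeat(np.arange(C), n)
    dense = np.where(block[:, None] == block[None, :], p_in, p_out)
    A = rng.random((N, N)) < dense
    A = np.triu(A, 1)
    A = (A | A.T).astype(int)     # undirected coupling, no self-loops

    theta = 4                     # firing threshold (arbitrary units)
    s = np.zeros(N, dtype=int)
    s[:n] = 1                     # transient stimulus: ignite cluster 0, then remove it

    for t in range(51):
        drive = A @ s + rng.poisson(0.3, size=N)   # recurrent input + background noise
        s = (drive >= theta).astype(int)
        if t % 10 == 0:
            rates = " ".join(f"{s[block == c].mean():.2f}" for c in range(C))
            print(f"t={t:2d}  cluster activity: {rates}")
    ```

    Running this, cluster 0 stays almost fully active for the whole run while the other clusters remain silent: the stimulus is retained in a metastable activity state, not in any weight change.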

    Reducing a cortical network to a Potts model yields storage capacity estimates

    An autoassociative network of Potts units, coupled via tensor connections, has been proposed and analysed as an effective model of an extensive cortical network with distinct short- and long-range synaptic connections, but it has not been clarified in what sense it can be regarded as an effective model. Here we draw the correspondence between the two, which indicates the need to introduce a local feedback term in the reduced model, i.e., in the Potts network. An effective model allows the study of phase transitions. As an example, we study the storage capacity of the Potts network with this additional term, the local feedback w, which contributes to drive the activity of the network towards one of the stored patterns. The storage capacity calculation, performed using replica tools, is limited to fully connected networks, for which a Hamiltonian can be defined. To extend the results to the case of intermediate partial connectivity, we also derive the self-consistent signal-to-noise analysis for the Potts network, and finally we discuss implications for semantic memory in humans.
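
    A toy numpy sketch of a Potts associative network of the kind described, including a local feedback term w that biases each unit towards its current state. The Hebbian tensor couplings and zero-temperature update rule are standard textbook choices, and all sizes are invented for illustration; none of these parameters come from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, S, P = 60, 4, 3            # units, Potts states per unit, stored patterns (toy sizes)
    xi = rng.integers(S, size=(P, N))              # random patterns to store

    # Hebbian tensor couplings J[i, j, a, b] between state a of unit i and state b of unit j
    d = (np.arange(S)[None, None, :] == xi[:, :, None]).astype(float) - 1.0 / S
    J = np.einsum('pia,pjb->ijab', d, d) / N
    J[np.arange(N), np.arange(N)] = 0.0            # no self-coupling

    def recall(s, sweeps=5, w=0.1):
        """Zero-temperature dynamics: each unit adopts the state with the largest field.
        w is the local feedback favouring the unit's current state."""
        for _ in range(sweeps):
            for i in rng.permutation(N):
                h = np.einsum('jab,jb->a', J[i], np.eye(S)[s])   # field on unit i
                h[s[i]] += w
                s[i] = int(np.argmax(h))
        return s

    cue = xi[0].copy()
    cue[: N // 4] = rng.integers(S, size=N // 4)   # corrupt a quarter of the cue
    out = recall(cue)
    print("fraction of units matching pattern 0:", (out == xi[0]).mean())
    ```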