
    Neural Networks With Asynchronous Control.

    Neural network studies have previously focused on monolithic structures. The brain has a bicameral nature, however, and so it is natural to expect that bicameral structures will perform better. This dissertation offers an approach to the development of such bicameral structures. The companion neural structure takes advantage of the global and subset characteristics of the stored memories. Specifically, we propose the use of an asynchronous controller C that implies the following update of a probe vector x by the connection matrix T: x' = sgn(C(x, Tx)). For a VLSI-implemented neural network the controller block can be easily placed in the feedback loop. In a network running asynchronously, the updating of the probe generally offers a choice among several components. If the right components are not updated, the network may converge to an incorrect stable point. The proposed asynchronous controller together with the basic neural net forms a bicameral network that can be programmed in various ways to exploit global and local characteristics of the stored memories. Several methods to do this are proposed. In one of the methods the update choices are based on bit frequencies; in another, handles are appended to the memories to improve retrieval. The new methods have been analyzed and their performance studied; a marked improvement in performance is shown and illustrated by means of simulations. The use of an asynchronous controller allows the implementation of conditional rules that occur frequently in AI applications. It is shown that a neural network that uses conditional rules can solve problems in natural language understanding. The introduction of the asynchronous controller may be viewed as a first step in the development of truly bicameral structures that may be seen as the next generation of neural computers.
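
    The following is a minimal Python/NumPy sketch, not the dissertation's implementation, of the update x' = sgn(C(x, Tx)) for a bipolar Hopfield-style network. The controller shown simply resolves the most strongly mismatched component first; it is a placeholder for the bit-frequency and handle-based rules described above.

        import numpy as np

        def hebbian_matrix(memories):
            # Outer-product (Hebbian) connection matrix T with zero diagonal.
            T = sum(np.outer(m, m) for m in memories).astype(float)
            np.fill_diagonal(T, 0.0)
            return T

        def controller(x, h):
            # Illustrative controller C: among components whose state opposes the
            # local field h = Tx, pick the one with the strongest field.  A
            # bit-frequency or handle-based rule would replace this choice.
            mismatched = np.where(x * h < 0)[0]
            if mismatched.size == 0:
                return None                        # stable point reached
            return mismatched[np.argmax(np.abs(h[mismatched]))]

        def recall(T, probe, max_steps=1000):
            # Asynchronous recall: per step, only the component chosen by C is
            # updated to x_i = sgn((Tx)_i), i.e. x' = sgn(C(x, Tx)).
            x = np.array(probe, dtype=float)
            for _ in range(max_steps):
                i = controller(x, T @ x)
                if i is None:
                    break
                x[i] = -x[i]                       # equals sgn((Tx)_i) here
            return x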

    Cortical free association dynamics: distinct phases of a latching network

    A Potts associative memory network has been proposed as a simplified model of macroscopic cortical dynamics, in which each Potts unit stands for a patch of cortex that can be activated in one of S local attractor states. The internal neuronal dynamics of the patch is not described by the model; rather, it is subsumed into an effective description in terms of graded Potts units, with adaptation effects both specific to each attractor state and generic to the patch. If each unit, or patch, receives effective (tensor) connections from C other units, the network has been shown to be able to store a large number p of global patterns, or network attractors, each with a fraction a of the units active, where the critical load p_c scales roughly as p_c ~ C S^2 / (a ln(1/a)) (if the patterns are randomly correlated). Interestingly, after retrieving an externally cued attractor, the network can continue jumping, or latching, from attractor to attractor, driven by adaptation effects. The occurrence and duration of latching dynamics is found through simulations to depend critically on the strength of the local attractor states, expressed in the Potts model by a parameter w. Here we describe, first with simulations and then analytically, the boundaries between the distinct phases of no latching, transient latching and sustained latching, deriving a phase diagram in the w-T plane, where T parametrizes thermal noise effects. Implications for real cortical dynamics are briefly reviewed in the conclusions.
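
    As a quick numerical aid, the scaling relation quoted above can be evaluated directly. The short Python sketch below treats the proportionality constant as a free parameter (coeff), since the abstract only gives the scaling; the example values of C, S and a are illustrative, not taken from the paper.

        import numpy as np

        def critical_load(C, S, a, coeff=1.0):
            # p_c ~ coeff * C * S**2 / (a * ln(1/a)) : rough storage capacity of a
            # Potts associative network with connectivity C, S local states per
            # unit and a fraction a of units active per (randomly correlated) pattern.
            return coeff * C * S ** 2 / (a * np.log(1.0 / a))

        # Illustrative values: C = 1000 tensor connections per unit, S = 7 local
        # attractor states, sparsity a = 0.1.
        print(critical_load(C=1000, S=7, a=0.1))   # ~2.1e5 patterns, up to the constant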

    Global adaptation in networks of selfish components: emergent associative memory at the system scale

    In some circumstances, complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation and optimisation are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose, but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or to produce such global behaviours when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully-distributed habituation or positive-feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximise their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns. Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning.
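
    A minimal sketch of the claimed homology, under simplifying assumptions that are not taken from the paper (bipolar agent states, an individual utility for agent i of s_i * sum_j w_ij s_j, and gradient-following agents): flipping one's own state to raise one's own utility gives Hopfield-like dynamics, and nudging one's own incoming relationships w_ij up the utility gradient gives exactly the Hebbian increment s_i s_j.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 30
        w = rng.normal(scale=0.1, size=(N, N))         # inter-agent relationship strengths
        np.fill_diagonal(w, 0.0)
        s = rng.choice([-1, 1], size=N).astype(float)  # agent states

        def utility(i):
            # Agent i's individual utility under the current relationships.
            return s[i] * (w[i] @ s)

        for step in range(20000):
            if step % 500 == 0:
                s = rng.choice([-1, 1], size=N).astype(float)  # occasional random reset
            i = rng.integers(N)
            if utility(i) < 0:
                s[i] = -s[i]              # self-interested behaviour: flip if it pays
            # Self-interested relationship change: dU_i/dw_ij = s_i * s_j, so gradient
            # ascent on one's own utility is a Hebbian update of one's own connections.
            w[i] += 0.001 * s[i] * s
            w[i, i] = 0.0
        # Over many resets, w comes to behave like a Hopfield weight matrix, i.e. the
        # system as a whole acquires an associative memory of the states it has visited.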

    An examination and analysis of the Boltzmann machine, its mean field theory approximation, and learning algorithm

    It is currently believed that artificial neural network models may form the basis for intelligent computational devices. The Boltzmann Machine belongs to the class of recursive artificial neural networks and uses a supervised learning algorithm to learn the mapping between input vectors and desired outputs. This study examines the parameters that influence the performance of the Boltzmann Machine learning algorithm. Improving the performance of the algorithm through the use of a naïve mean field theory approximation is also examined. The study was initiated to examine the hypothesis that the Boltzmann Machine learning algorithm, when used with the mean field approximation, is an efficient, reliable, and flexible model of machine learning. An empirical analysis of the performance of the algorithm supports this hypothesis. The performance of the algorithm is investigated by applying it to train the Boltzmann Machine, and its mean field approximation, on the exclusive-OR function. Simulation results suggest that the mean field theory approximation learns faster than the Boltzmann Machine and shows better stability. The size of the network and the learning rate were found to have considerable impact upon the performance of the algorithm, especially in the case of the mean field theory approximation. A comparison is made with the feed-forward back-propagation paradigm, and it is found that the back-propagation network learns the exclusive-OR function eight times faster than the mean field approximation. However, the mean field approximation demonstrated better reliability and stability. Because the mean field approximation is local and asynchronous, it has an advantage over back propagation with regard to a parallel implementation. The mean field approximation is domain independent and structurally flexible. These features make the network suitable for use with a structural adaptation algorithm, allowing the network to modify its architecture in response to the external environment.
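
    For reference, a minimal sketch of the naive mean-field machinery the study evaluates, assuming bipolar units and NumPy (the function names and the clamping convention here are illustrative, not the thesis's code): settling iterates m_i = tanh((sum_j W_ij m_j + b_i)/T), and the weight update uses mean-field correlations in place of sampled correlations.

        import numpy as np

        def mean_field_settle(W, b, clamp=None, T=1.0, iters=200):
            # Naive mean-field settling: iterate m_i = tanh((sum_j W_ij m_j + b_i) / T)
            # to a fixed point; 'clamp' maps unit index -> value held fixed in [-1, 1].
            m = np.zeros(len(b))
            for _ in range(iters):
                m = np.tanh((W @ m + b) / T)
                if clamp:
                    for i, v in clamp.items():
                        m[i] = v
            return m

        def mean_field_learning_step(W, b, patterns, lr=0.05, T=1.0):
            # Boltzmann learning rule with mean-field statistics:
            # dW_ij ~ <m_i m_j>_clamped - <m_i m_j>_free.
            clamped = np.zeros_like(W)
            for pattern in patterns:                  # pattern: {visible unit: +/-1}
                m = mean_field_settle(W, b, clamp=pattern, T=T)
                clamped += np.outer(m, m)
            clamped /= len(patterns)
            m_free = mean_field_settle(W, b, T=T)
            W += lr * (clamped - np.outer(m_free, m_free))
            np.fill_diagonal(W, 0.0)
            return W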

    Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks

    The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this is an approach that seems to require top-down domain knowledge, not one amenable to the spontaneous energy minimisation behaviour of a natural dynamical system. However, in this paper we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organisation. We use a ‘self-modelling’ Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimisation behaviour of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, fully-distributed positive feedback mechanisms that are relevant to other ‘active linking’ and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks such as social, genetic and ecological networks.
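
    A minimal sketch of the general idea, not the paper's specific model (its novel type of associative connection is not detailed in the abstract): a Hopfield-style network repeatedly relaxes on fixed constraint weights W0 plus slowly learned weights WL, and applies a small Hebbian update to WL at each attractor it reaches. Across restarts the learned weights enlarge the basins of good configurations, so relaxation tends to resolve the original constraints W0 better over time.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 50
        W0 = rng.choice([-1.0, 1.0], size=(N, N))     # fixed 'problem' constraints
        W0 = (W0 + W0.T) / 2                          # symmetric, Hopfield-style energy
        np.fill_diagonal(W0, 0.0)
        WL = np.zeros((N, N))                         # slowly learned associative weights

        def relax(W, s, sweeps=20):
            # Asynchronous energy minimisation under weights W (finds a local optimum).
            for _ in range(sweeps):
                for i in rng.permutation(N):
                    s[i] = 1 if W[i] @ s > 0 else -1
            return s

        for epoch in range(200):
            s = rng.choice([-1, 1], size=N)           # restart from a random state
            s = relax(W0 + WL, s)                     # behaviour uses the combined weights
            WL += 0.0005 * np.outer(s, s)             # slow Hebbian change of relationships
            np.fill_diagonal(WL, 0.0)
            # -0.5 * s @ W0 @ s measures how well the ORIGINAL constraints are resolved;
            # it tends to fall (improve) over epochs as WL reshapes the attractor basins.
            if epoch % 50 == 0:
                print(epoch, -0.5 * s @ W0 @ s)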