
    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion, we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors.
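    To make the class of models the survey covers concrete, the following is a minimal sketch of a Hopfield-style autoassociative memory with a local (Hebbian) learning rule and iterative retrieval dynamics. The pattern dimension, number of stored patterns, and bipolar encoding are illustrative assumptions, not values from the paper.

```python
# Hopfield-style autoassociative memory: local Hebbian storage + iterative recall.
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 10                        # neurons, stored patterns (assumed sizes)
patterns = rng.choice([-1, 1], size=(m, n))

# Hebbian storage: each weight depends only on the two neurons it connects.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def retrieve(cue, steps=20):
    """Iterate the network from a noisy cue until the state stops changing."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1         # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Corrupt one stored pattern and recover it from the memory.
cue = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
cue[flip] *= -1
recovered = retrieve(cue)
print("overlap with stored pattern:", (recovered @ patterns[0]) / n)
```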

    High performance associative memory models and weight dilution

    The consequences of diluting the weights of the standard Hopfield-architecture associative memory model, trained using perceptron-like learning rules, are examined. A proportion of the weights of the network is removed; this can be done in a symmetric or an asymmetric way, and both methods are investigated. This paper reports experimental investigations into the consequences of dilution in terms of capacity, training times and size of basins of attraction. It is concluded that these networks maintain a reasonable performance at fairly high dilution rates.
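    A hedged sketch of the two dilution schemes described above: symmetric dilution removes a connection in both directions at once, asymmetric dilution removes each directed weight independently. The weight matrix and dilution rate here are placeholders; the paper's perceptron-style training is not reproduced.

```python
# Symmetric vs. asymmetric dilution of a weight matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = rng.normal(size=(n, n))           # stand-in for a trained weight matrix
np.fill_diagonal(W, 0.0)

def dilute_symmetric(W, rate, rng):
    """Remove each weight pair (i, j) and (j, i) together with probability `rate`."""
    keep = rng.random((W.shape[0], W.shape[0])) >= rate
    keep = np.triu(keep, k=1)
    keep = keep | keep.T              # the same connection is cut in both directions
    return W * keep

def dilute_asymmetric(W, rate, rng):
    """Remove each directed weight independently with probability `rate`."""
    keep = rng.random(W.shape) >= rate
    np.fill_diagonal(keep, False)
    return W * keep

W_sym = dilute_symmetric(W, rate=0.3, rng=rng)
W_asym = dilute_asymmetric(W, rate=0.3, rng=rng)
print("weights kept (symmetric): ", np.count_nonzero(W_sym) / (n * (n - 1)))
print("weights kept (asymmetric):", np.count_nonzero(W_asym) / (n * (n - 1)))
```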

    Optimisation in ‘Self-modelling’ Complex Adaptive Systems

    When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower-energy configurations more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states (see the sketch below). This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely 'recalling' low-energy states that have been previously visited but 'predicting' their location by generalising over local attractor states that have already been visited. This 'self-modelling' framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.
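    The following is an illustrative sketch of the 'self-modelling' idea: a Hopfield-like constraint network is repeatedly relaxed from random states, and a slow Hebbian update accumulates an associative memory of the attractors it visits. The network size, learning rate, and random constraint matrix are assumptions for the sketch, not the paper's exact experimental setup.

```python
# Self-modelling: relax a constraint network repeatedly and let Hebbian
# learning amplify its own attractors.
import numpy as np

rng = np.random.default_rng(2)
n = 60
C = rng.choice([-1, 1], size=(n, n)) * 0.1    # fixed, conflicting constraints
C = np.triu(C, 1)
C = C + C.T
L = np.zeros((n, n))                          # learned (Hebbian) component

def relax(W, steps=600):
    """Asynchronous relaxation to a local attractor of E = -0.5 * s W s."""
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(W, s):
    return -0.5 * s @ W @ s

eta = 0.001                                   # slow learning rate (assumed)
for epoch in range(200):
    s = relax(C + L)                          # behaviour shaped by constraints + memory
    L += eta * np.outer(s, s)                 # Hebbian reinforcement of the visited attractor
    np.fill_diagonal(L, 0.0)

# Compare attractor quality on the *original* constraints, with and without
# the learned associative memory.
base = np.mean([energy(C, relax(C)) for _ in range(20)])
self_model = np.mean([energy(C, relax(C + L)) for _ in range(20)])
print("mean constraint energy without memory:", round(base, 2))
print("mean constraint energy with memory:   ", round(self_model, 2))
```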

    Transient Dynamics of Sparsely Connected Hopfield Neural Networks with Arbitrary Degree Distributions

    Using a probabilistic approach, the transient dynamics of sparsely connected Hopfield neural networks is studied for arbitrary degree distributions. A recursive scheme is developed to determine the time evolution of the overlap parameters. As illustrative examples, explicit calculations of the dynamics are performed for networks with binomial, power-law, and uniform degree distributions. The results are in good agreement with extensive numerical simulations. They indicate that, for the same average degree, network performance gradually improves with increasing sharpness of the degree distribution, and that the most efficient degree distribution for global storage of patterns is the delta function.
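    As a rough companion to the abstract, the sketch below simulates retrieval in a sparsely connected Hopfield network whose in-degrees are drawn from a chosen distribution, tracking the overlap m(t) = (1/N) * sum_i xi_i * s_i(t) over time. The network size, degree law, and pattern load are illustrative assumptions; the paper's recursive analytical scheme is not reproduced here, only the kind of simulation it would be compared against.

```python
# Overlap dynamics of a sparsely connected Hopfield network with a given
# in-degree distribution (binomial here, as one example).
import numpy as np

rng = np.random.default_rng(3)
N, P = 500, 10
patterns = rng.choice([-1, 1], size=(P, N))

# In-degrees drawn from a binomial distribution with mean c (assumed value).
c = 20
degrees = rng.binomial(N - 1, c / (N - 1), size=N)

# Sparse Hebbian couplings: neuron i listens to `degrees[i]` random neurons.
W = np.zeros((N, N))
hebb = (patterns.T @ patterns) / c
for i in range(N):
    others = np.delete(np.arange(N), i)
    js = rng.choice(others, size=degrees[i], replace=False)
    W[i, js] = hebb[i, js]

def overlap(s, xi):
    return (s @ xi) / N

# Start from a corrupted version of pattern 0 and track the overlap in time.
s = patterns[0] * rng.choice([-1, 1], size=N, p=[0.2, 0.8])
for t in range(8):
    print(f"t={t}  m={overlap(s, patterns[0]):.3f}")
    s = np.sign(W @ s)
    s[s == 0] = 1
```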