
    Mutual information and self-control of a fully-connected low-activity neural network

    A self-control mechanism for the dynamics of a three-state fully connected neural network is studied through the introduction of a time-dependent threshold. The self-adapting threshold is a function of both the neural and the pattern activity in the network. The time evolution of the order parameters is obtained on the basis of a recently developed dynamical recursive scheme. In the limit of low activity, the mutual information is shown to be the relevant parameter for determining the retrieval quality. Self-control improves this mutual information content, increases the storage capacity, and enlarges the basins of attraction. These results are compared with numerical simulations.
    Comment: 8 pages, 8 PostScript figures
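    As a rough illustration of the kind of dynamics the abstract describes, the sketch below runs a three-state network whose threshold adapts to the current neural activity. The covariance-style Hebbian couplings and the specific threshold rule are illustrative assumptions, not the paper's actual equations.

```python
# Sketch of threshold self-control in a three-state attractor network.
# The Hebbian rule and the theta update are assumed forms for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 500, 10, 0.1          # neurons, patterns, pattern activity

# Sparse ternary patterns: +/-1 each with probability a/2, 0 otherwise.
xi = rng.choice([-1, 0, 1], size=(P, N), p=[a / 2, 1 - a, a / 2])

# Hebbian couplings normalized by the pattern activity (assumed form).
J = xi.T @ xi / (a * N)
np.fill_diagonal(J, 0.0)

def retrieve(s, steps=20):
    """Parallel dynamics with a threshold tracking the neural activity."""
    for _ in range(steps):
        h = J @ s
        theta = 0.5 * np.mean(s != 0)   # assumed self-control rule
        s = np.where(np.abs(h) > theta, np.sign(h), 0.0)
    return s

# Cue: a noisy version of the first pattern.
cue = xi[0].astype(float)
flip = rng.random(N) < 0.1
cue[flip] = rng.choice([-1, 0, 1], size=flip.sum())
out = retrieve(cue)
print("overlap with stored pattern:", (out * xi[0]).sum() / (a * N))
```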

    Topology and Computational Performance of Attractor Neural Networks

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural networks with regular-lattice, random, small-world, and scale-free topologies. The random network is the most efficient for storage and retrieval of patterns by the entire network. In the scale-free case, however, retrieval errors are not distributed uniformly: the portion of a pattern encoded by the subset of highly connected nodes is recognized more robustly and efficiently than the rest of the pattern. The scale-free network thus achieves very strong partial recognition. Implications for brain function and social dynamics are suggestive.
    Comment: 2 figures included. Submitted to Phys. Rev. Letters
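    The comparison the abstract describes can be sketched by running standard Hopfield dynamics on Hebbian couplings masked by different graph topologies. The update scheme and all parameters below are conventional choices, not taken from the paper; the graphs are built with networkx.

```python
# Sketch: Hopfield retrieval on diluted topologies, comparing a random
# (Erdos-Renyi) mask with a scale-free (Barabasi-Albert) mask.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
N, P = 300, 4
patterns = rng.choice([-1, 1], size=(P, N))

def hopfield_weights(adj):
    W = (patterns.T @ patterns) / N * adj   # Hebbian weights on the edges only
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, s, steps=10):
    for _ in range(steps):
        s = np.sign(W @ s + 1e-12)          # parallel update; bias avoids sign(0)
    return s

graphs = {
    "random": nx.gnm_random_graph(N, 20 * N, seed=1),
    "scale-free": nx.barabasi_albert_graph(N, 20, seed=1),
}
for name, g in graphs.items():
    W = hopfield_weights(nx.to_numpy_array(g))
    noisy = patterns[0] * np.where(rng.random(N) < 0.15, -1, 1)
    m = recall(W, noisy.astype(float)) @ patterns[0] / N
    print(f"{name}: overlap after recall = {m:.2f}")
```

    Measuring the overlap separately on high-degree and low-degree nodes of the scale-free graph would exhibit the non-uniform error distribution the abstract reports.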

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw, and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. We conclude by discussing the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors.
    Comment: 31 pages
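    As a concrete example of one surveyed model family, here is a minimal Willshaw-style binary autoassociative memory: clipped Hebbian storage on sparse binary vectors and threshold retrieval from a partial cue. Parameter values are illustrative only.

```python
# Willshaw-style binary memory: weights are the clipped (OR-ed) sum of
# outer products; retrieval thresholds dendritic sums at the cue activity.
import numpy as np

rng = np.random.default_rng(2)
N, K, M = 1000, 20, 50                 # neurons, ones per pattern, stored patterns

patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1

# Clipped Hebbian storage: W_ij = 1 if any stored pattern has both bits on.
# Self-connections are kept so active cue bits can reach the threshold.
W = (patterns.T @ patterns > 0).astype(int)

def retrieve(cue):
    sums = W @ cue                      # dendritic sums
    theta = cue.sum()                   # Willshaw threshold: number of active cue bits
    return (sums >= theta).astype(int)

# Retrieval from a partial cue: half of the first pattern's active bits.
cue = patterns[0].copy()
on = np.flatnonzero(cue)
cue[on[: K // 2]] = 0
out = retrieve(cue)
print("recovered bits:", int((out & patterns[0]).sum()), "of", K)
```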