
    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion, we discuss the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors.
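
    The recall dynamics shared by these models can be made concrete with a toy implementation. Below is a minimal sketch of Hopfield-style autoassociative recall, assuming bipolar (+1/-1) patterns and a Hebbian outer-product rule; the function names `store` and `recall` are illustrative, not taken from the survey.

```python
# Minimal Hopfield-style autoassociative memory sketch (assumed setup:
# bipolar patterns, Hebbian outer-product weights, synchronous updates).
import numpy as np

def store(patterns):
    """Build the weight matrix W from bipolar patterns (one per row)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n   # Hebbian outer-product learning rule
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def recall(W, probe, steps=20):
    """Iterate the network dynamics from a noisy probe until a fixed point."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1       # break ties deterministically
        if np.array_equal(s_new, s):
            break                   # reached a fixed point (attractor)
        s = s_new
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(10, 256))   # 10 patterns, 256 neurons
W = store(patterns)
noisy = patterns[0].copy()
noisy[:30] *= -1                                 # flip ~12% of the entries
print(np.array_equal(recall(W, noisy), patterns[0]))
```

    With 10 patterns in 256 neurons the load is well below the classical capacity limit (roughly 0.14 patterns per neuron), so the corrupted probe is typically restored exactly.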

    A linear approach for sparse coding by a two-layer neural network

    Many approaches that transform classification problems from non-linear to linear by feature transformation have recently been presented in the literature. These notably include sparse coding methods and deep neural networks. However, many of these approaches require the repeated application of a learning process each time unseen input vectors are presented, or else involve large numbers of parameters and hyper-parameters that must be chosen through cross-validation, which increases running time dramatically. In this paper, we propose and experimentally investigate a new approach aimed at overcoming both kinds of limitation. The proposed approach makes use of a linear auto-associative network (called SCNN) with just one hidden layer. The combination of this architecture with a specific error function to be minimized enables one to learn a linear encoder that computes a sparse code as similar as possible to the one obtained by re-training the neural network. Importantly, the linearity of SCNN and the choice of the error function allow one to achieve reduced running time in the learning phase. The proposed architecture is evaluated on two standard machine learning tasks, and its performance is compared with that of recently proposed non-linear auto-associative neural networks. The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in machine learning problems, provided that an appropriate error function is used during the learning phase.
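
    To make the idea concrete, here is a minimal sketch of a one-hidden-layer linear autoencoder trained with plain gradient descent to produce sparse codes. The L1 penalty is an illustrative stand-in for the paper's specific error function (which is not reproduced here), and all names and hyper-parameter values are assumptions.

```python
# Linear autoencoder sketch: linear encoder E, linear decoder D, trained on
# reconstruction error plus an L1 sparsity penalty on the hidden code.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))              # 500 samples, 64 features
n_hidden, lam, lr = 32, 0.1, 0.01

E = rng.standard_normal((64, n_hidden)) * 0.1   # linear encoder weights
D = rng.standard_normal((n_hidden, 64)) * 0.1   # linear decoder weights

for epoch in range(1000):
    H = X @ E                                   # hidden code
    R = H @ D                                   # linear reconstruction
    err = R - X
    # Gradients of ||R - X||^2 + lam * ||H||_1
    # (constant factors folded into the learning rate)
    grad_D = H.T @ err
    grad_E = X.T @ (err @ D.T + lam * np.sign(H))
    D -= lr * grad_D / len(X)
    E -= lr * grad_E / len(X)

codes = X @ E                  # encoding new data is one matrix multiply
print(np.mean(np.abs(codes) < 0.05))  # diagnostic: near-zero code entries
```

    Because the trained encoder is a single matrix, encoding unseen data costs one matrix multiplication, which is the running-time advantage the abstract emphasizes.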

    Disappearance of Spurious States in Analog Associative Memories

    We show that symmetric n-mixture states, when they exist, are almost never stable in autoassociative networks with threshold-linear units. Only with a binary coding scheme could we find a limited region of the parameter space in which either 2-mixtures or 3-mixtures are stable attractors of the dynamics.
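
    A small simulation illustrates the kind of instability reported. The sketch below uses threshold-linear (ReLU) units, a covariance learning rule for sparse binary patterns, and a global activity normalization as a stand-in for gain control; these are standard modelling choices, assumed here, not the paper's exact setup.

```python
# Probe the stability of a symmetric 2-mixture state under
# threshold-linear autoassociative dynamics (assumed covariance-rule
# weights and global activity normalization).
import numpy as np

rng = np.random.default_rng(1)
N, P, a = 400, 5, 0.2                      # neurons, patterns, sparseness
xi = (rng.random((P, N)) < a).astype(float)

W = (xi - a).T @ (xi - a) / (a * (1 - a) * N)  # covariance learning rule
np.fill_diagonal(W, 0.0)                        # no self-connections

def overlaps(s):
    """Normalized overlap of the state with each stored pattern."""
    return (xi - a) @ s / (a * (1 - a) * N)

s = 0.5 * (xi[0] + xi[1])                  # symmetric 2-mixture initial state
for _ in range(50):
    s = np.maximum(W @ s, 0.0)             # threshold-linear (ReLU) update
    s *= a * N / (s.sum() + 1e-12)         # global gain control on activity
print(np.round(overlaps(s), 2))
```

    Starting from the symmetric 2-mixture, crosstalk from the other stored patterns breaks the symmetry, and the state typically relaxes toward a single pattern, consistent with the claim that symmetric mixtures are rarely stable.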

    Microbial life cycles link global modularity in regulation to mosaic evolution

    Microbes are exposed to changing environments, to which they can respond by adopting various lifestyles such as swimming, colony formation or dormancy. These lifestyles are often studied in isolation, giving a fragmented view of the life cycle as a whole. Here, we study lifestyles in the context of this whole. We first use machine learning to reconstruct the expression changes underlying life cycle progression in the bacterium Bacillus subtilis, based on hundreds of previously acquired expression profiles. This yields a timeline that reveals the modular organization of the life cycle. By analysing over 380 Bacillales genomes, we then show that life cycle modularity gives rise to mosaic evolution, in which life stages such as motility and sporulation are conserved and lost as discrete units. We postulate that this mosaic conservation pattern results from habitat changes that make these life stages obsolete or detrimental. Indeed, when evolving eight distinct Bacillales strains and species under laboratory conditions that favour colony growth, we observe rapid and parallel losses of the sporulation life stage across species, induced by mutations that affect the same global regulator. We conclude that a life cycle perspective is pivotal to understanding the causes and consequences of modularity in both regulation and evolution.