
    The Performance of Associative Memory Models with Biologically Inspired Connectivity

    This thesis is concerned with one important question in artificial neural networks: how biologically inspired connectivity affects a network’s associative memory performance. In recent years, research on the mammalian cerebral cortex, which bears the main responsibility for associative memory in the brain, has suggested that this cortical network is far from the fully connected structure commonly assumed in traditional associative memory models. It is instead a sparse network with interesting connectivity characteristics, such as the “small world network” properties of short Mean Path Length, high Clustering Coefficient, and high Global and Local Efficiency. Most of the networks in this thesis are therefore sparsely connected. There is, however, no conclusive evidence of how these different connectivity characteristics affect the associative memory performance of a network. This thesis addresses that question using networks with different types of connectivity, inspired by biological evidence. The findings of this programme are unexpected and important. Results show that the performance of a non-spiking associative memory model can be predicted from the Clustering Coefficient of the network via a linear correlation, regardless of the detailed connectivity pattern. This is particularly important because the Clustering Coefficient is a static measure of one aspect of connectivity, whilst associative memory performance reflects the outcome of a complex dynamic process. The research also reveals that improvements in a network’s performance do not necessarily rely on an increase in its wiring cost, so it is possible to construct networks with high associative memory performance but relatively low wiring cost. In particular, Gaussian distributed connectivity is found to achieve the best performance at the lowest wiring cost among all examined connectivity models. The results also suggest that a modular network with an appropriate configuration of Gaussian distributed connectivity, both within each module and across modules, can perform nearly as well as a Gaussian distributed non-modular network. Finally, a comparison between non-spiking and spiking associative memory models suggests that, in terms of associative memory performance, the implications of connectivity transcend the details of the neural model, that is, whether the neurons are spiking or non-spiking.
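
    A minimal sketch of the kind of experiment described, assuming a standard Hopfield-style non-spiking model with Gaussian distance-dependent wiring on a ring; the construction and parameters below are illustrative assumptions, not the thesis's actual models:

        # Sparse Hopfield-style associative memory with Gaussian-distributed wiring
        # on a ring, plus a Clustering Coefficient measurement and a recall test.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        N, K, P, SIGMA = 200, 20, 10, 10.0   # units, connections per unit, patterns, Gaussian width

        # Gaussian-distributed connectivity: each unit connects to K others,
        # with probability falling off with ring distance.
        adj = np.zeros((N, N), dtype=bool)
        for i in range(N):
            d = np.minimum(np.abs(np.arange(N) - i), N - np.abs(np.arange(N) - i))
            p = np.exp(-0.5 * (d / SIGMA) ** 2)
            p[i] = 0.0
            targets = rng.choice(N, size=K, replace=False, p=p / p.sum())
            adj[i, targets] = True
        adj = adj | adj.T                      # symmetrise so the graph is undirected

        # Static connectivity measure: the network's Clustering Coefficient.
        cc = nx.average_clustering(nx.from_numpy_array(adj.astype(int)))

        # Hebbian storage of P random +/-1 patterns, masked by the sparse connectivity.
        patterns = rng.choice([-1, 1], size=(P, N))
        W = (patterns.T @ patterns) / N
        np.fill_diagonal(W, 0.0)
        W *= adj

        def recall(cue, steps=5):
            """Asynchronous updates from a corrupted cue toward a stored attractor."""
            s = cue.copy()
            for _ in range(steps):
                for i in rng.permutation(N):
                    s[i] = 1 if W[i] @ s >= 0 else -1
            return s

        cue = patterns[0].copy()
        flip = rng.choice(N, size=N // 5, replace=False)   # corrupt 20% of the cue
        cue[flip] *= -1
        overlap = np.mean(recall(cue) == patterns[0])

        print(f"clustering coefficient = {cc:.3f}, recall overlap = {overlap:.2f}")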

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).
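
    Since the survey was motivated by preprocessing images for a sparse distributed memory, a minimal Kanerva-style SDM write/read sketch may help fix ideas; the construction and parameters below are standard textbook assumptions rather than anything specified in the survey:

        # Minimal sparse distributed memory (SDM) sketch: random hard addresses,
        # Hamming-radius activation, counter-based storage. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        N_BITS, N_LOCATIONS, RADIUS = 256, 2000, 111   # word length, hard locations, activation radius

        hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS), dtype=np.int8)
        counters = np.zeros((N_LOCATIONS, N_BITS), dtype=np.int32)

        def activated(address):
            """Indices of hard locations within Hamming distance RADIUS of `address`."""
            dist = np.count_nonzero(hard_addresses != address, axis=1)
            return np.flatnonzero(dist <= RADIUS)

        def write(address, word):
            counters[activated(address)] += np.where(word == 1, 1, -1).astype(np.int32)

        def read(address):
            total = counters[activated(address)].sum(axis=0)
            return (total > 0).astype(np.int8)

        # Store a pattern autoassociatively, then recall it from a noisy cue.
        pattern = rng.integers(0, 2, size=N_BITS, dtype=np.int8)
        write(pattern, pattern)
        cue = pattern.copy()
        cue[rng.choice(N_BITS, size=20, replace=False)] ^= 1   # flip 20 bits
        print("bits recovered:", np.count_nonzero(read(cue) == pattern), "/", N_BITS)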

    Efficient Methods for Unsupervised Learning of Probabilistic Models

    In this thesis, I develop a variety of techniques to train, evaluate, and sample from intractable and high-dimensional probabilistic models. Abstract exceeds arXiv space limitations -- see PDF.

    AI of Brain and Cognitive Sciences: From the Perspective of First Principles

    In recent years, AI has achieved great success in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, many tasks in our daily life that are rather simple for humans still pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy-cost computing. Thus, learning from the brain remains a promising way to shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, the product of evolution for animals surviving in the natural environment. At the behavioral level, psychology and the cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structural level, cognitive and computational neuroscience have revealed that the brain has an extremely complicated but elegant network structure to support its functions. Over the years, knowledge about the structure and functions of the brain has accumulated, and this process has accelerated recently with the initiation of large-scale brain projects worldwide. Here, we argue that the general principles of brain function are the most valuable sources of inspiration for the development of AI. These general principles are the standard rules by which the brain extracts, represents, manipulates, and retrieves information, and we call them the first principles of the brain. This paper collects six such first principles: attractor networks, criticality, random networks, sparse coding, relational memory, and perceptual learning. For each topic, we review its biological background, fundamental properties, potential applications to AI, and future development. Comment: 59 pages, 5 figures, review article.
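
    As a concrete instance of one of the listed principles, here is a minimal sparse coding sketch using ISTA inference under a random dictionary; the algorithm choice and parameters are assumptions made for illustration, not taken from the paper:

        # Sparse coding sketch: infer a sparse code `a` such that x ~= D @ a using
        # ISTA (iterative shrinkage-thresholding). The dictionary D is random here
        # purely for illustration.
        import numpy as np

        rng = np.random.default_rng(2)
        n_inputs, n_codes, k_active = 64, 256, 5

        D = rng.standard_normal((n_inputs, n_codes))
        D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms

        # Generate an input from a known sparse code, then try to recover it.
        true_code = np.zeros(n_codes)
        true_code[rng.choice(n_codes, size=k_active, replace=False)] = rng.standard_normal(k_active)
        x = D @ true_code

        def ista(x, D, lam=0.05, n_iter=500):
            """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by proximal gradient descent."""
            L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ a - x)
                a = a - grad / L
                a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
            return a

        a = ista(x, D)
        print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-3))
        print("reconstruction error:", np.linalg.norm(x - D @ a))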

    Attractors, memory and perception

    In this Thesis, the first three introductory chapters are devoted to a review of the literature on contextual perception, its neural basis, and network modeling of memory. In chapter 4, the first two sections define our model; the next two sections, 4.3 and 4.4, report my original work on the retrieval properties of different network structures and on the network dynamics underlying the response to ambiguous patterns, respectively. The work reported in chapter 5 was done in collaboration with Prof Bharathi Jagadeesh at the University of Washington and has already been published in the journal “Cerebral Cortex”. In this collaboration, Yan Liu, from the group in Seattle, carried out the recording experiments, and I did the data analysis and network simulations. Chapter 6, which presents a network model of “priming” and the “adaptation aftereffect”, is my own work. The work reported in 4.3, 4.5, and the whole of chapter 6 is in preparation for publication.
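
    A minimal sketch of attractor dynamics responding to an ambiguous cue, in the spirit of the retrieval questions above: two patterns stored in a Hopfield-style network and a mixed cue that the dynamics resolve toward one attractor. This is an illustrative assumption, not the network model defined in the thesis:

        # Two patterns are stored in a Hopfield-style network, the cue is an
        # ambiguous mixture of both, and the dynamics settle into one attractor.
        import numpy as np

        rng = np.random.default_rng(4)
        N = 500

        A, B = rng.choice([-1, 1], size=(2, N))          # two stored patterns
        W = (np.outer(A, A) + np.outer(B, B)) / N
        np.fill_diagonal(W, 0.0)

        def settle(state, steps=10):
            """Synchronous updates until the state (typically) reaches an attractor."""
            for _ in range(steps):
                state = np.where(W @ state >= 0, 1, -1)
            return state

        for mix in (0.3, 0.5, 0.7):                      # fraction of the cue taken from A
            take_a = rng.random(N) < mix
            cue = np.where(take_a, A, B)
            final = settle(cue)
            print(f"mix={mix:.1f}  overlap with A: {np.mean(final == A):.2f}  "
                  f"with B: {np.mean(final == B):.2f}")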

    The stability and attractivity of neural associative memories.

    Han-bing Ji. Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (p. 160-163). Microfiche. Ann Arbor, Mich.: UMI, 1998. 2 microfiches; 11 x 15 cm.

    Hardware Architectures and Implementations for Associative Memories : the Building Blocks of Hierarchically Distributed Memories

    During the past several decades, the semiconductor industry has grown into a global industry with revenues of around $300 billion. Intel no longer relies only on transistor scaling for higher CPU performance, but instead focuses more on placing multiple cores on a single die. It has been projected that in 2016 most CMOS circuits will be manufactured with a 22 nm process, and these circuits will have a large number of defects. In particular, as transistors shrink below the sub-micron scale, originally deterministic circuits begin to show probabilistic characteristics. It would therefore be challenging to map traditional computational models onto probabilistic circuits, suggesting a need for fault-tolerant computational algorithms. Biologically inspired algorithms, or associative memories (AMs), which are the building blocks of the cortical hierarchically distributed memories (HDMs) discussed in this dissertation, exhibit a remarkable match to nano-scale electronics in addition to great fault tolerance. Research on the potential mapping of the HDM onto CMOL (hybrid CMOS/nanoelectronic) nanogrids provides useful insight into the development of non-von Neumann neuromorphic architectures and the semiconductor industry. In this dissertation, we investigated implementations of AMs on different hardware platforms, including a microprocessor-based personal computer (PC), a PC cluster, field programmable gate arrays (FPGAs), CMOS, and CMOL nanogrids. We studied two types of neural associative memory models, with and without temporal information. We first decomposed the computational models into basic, common operations such as matrix-vector inner products and k-winners-take-all (k-WTA). We then analyzed the baseline performance/price ratio of implementing the AMs on a PC, and continued with a similar performance/price analysis of implementations on more parallel hardware platforms such as a PC cluster and an FPGA. The majority of the research, however, emphasized implementations in all-digital and mixed-signal full-custom CMOS and CMOL nanogrids. We conclude that mixed-signal CMOL nanogrids exhibit the best performance/price ratio of all the hardware platforms considered. We also highlight trade-offs between dedicated and virtualized hardware circuits for the HDM models: a simple time-multiplexing scheme for the digital CMOS implementations can achieve throughput comparable to that of the mixed-signal CMOL nanogrids.
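
    For reference, the two core operations named above, a matrix-vector inner product followed by k-winners-take-all, can be sketched in software as follows; the sizes and weight matrix are arbitrary assumptions, and this is a software reference point rather than the hardware mapping the dissertation evaluates:

        # Matrix-vector inner product followed by k-winners-take-all (k-WTA).
        import numpy as np

        rng = np.random.default_rng(3)

        def k_wta(activations, k):
            """Return a binary vector with 1s at the k largest activations."""
            winners = np.argpartition(activations, -k)[-k:]
            out = np.zeros_like(activations, dtype=np.int8)
            out[winners] = 1
            return out

        n_in, n_out, k = 1024, 4096, 32
        W = rng.standard_normal((n_out, n_in)).astype(np.float32)   # weight matrix
        x = rng.integers(0, 2, size=n_in).astype(np.float32)        # binary input vector

        y = k_wta(W @ x, k)          # inner product, then sparsify to k active units
        print("active units:", int(y.sum()))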

    Sleep targets highly connected global and local nodes to aid consolidation of learned graph networks

    Much of our long-term knowledge is organised in complex networks. Sleep is thought to be critical for abstracting knowledge and enhancing memory for important items for long-term retention. Sleep should therefore aid the development of memory for networks and the abstraction of their structure for efficient storage. However, this remains unknown because past sleep studies have focused on discrete items. Here we explored the impact of sleep (a night-sleep/day-wake within-subject paradigm with 25 male participants) on memory for graph networks in which some items were important because of dense local connections (degree centrality) or, independently, because of greater global connections (closeness/betweenness centrality). A network of 27 planets (nodes) sparsely interconnected by 36 teleporters (edges) was learned via discrete associations, without explicit indication of any network structure. Despite equivalent exposure to all connections in the network, memory for links between items with high local connectivity or high global connectivity was better retained after sleep. These results highlight that sleep has the capacity to strengthen both the global and local structure of knowledge acquired from the world and to abstract over multiple experiences to form efficient internal networks of knowledge.
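
    The node-importance measures used here are standard graph centralities; a small sketch on an arbitrary 27-node, 36-edge random graph (a stand-in, not the actual planet/teleporter network from the experiment) shows how they are computed:

        # Degree centrality (dense local connections) versus closeness and
        # betweenness centrality (global connections) on a 27-node, 36-edge graph.
        import networkx as nx

        G = nx.gnm_random_graph(n=27, m=36, seed=0)

        degree      = nx.degree_centrality(G)       # dense local connections
        closeness   = nx.closeness_centrality(G)    # short paths to all other nodes
        betweenness = nx.betweenness_centrality(G)  # lies on many shortest paths

        def top(centrality, n=3):
            """Return the n highest-ranked nodes under a centrality measure."""
            return sorted(centrality, key=centrality.get, reverse=True)[:n]

        print("top degree nodes:     ", top(degree))
        print("top closeness nodes:  ", top(closeness))
        print("top betweenness nodes:", top(betweenness))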