616 research outputs found

    The Missing Link between Morphemic Assemblies and Behavioral Responses: a Bayesian Information-Theoretical model of lexical processing

    We present the Bayesian Information-Theoretical (BIT) model of lexical processing: a mathematical model illustrating a novel approach to the modelling of language processes. The model shows how a neurophysiological theory of lexical processing relying on Hebbian association and neural assemblies can directly account for a variety of effects previously observed in behavioural experiments. We develop two information-theoretical measures of the distribution of usages of a morpheme or word, and use them to predict responses in three visual lexical decision datasets investigating inflectional morphology and polysemy. Our model offers a neurophysiological basis for the effects of morpho-semantic neighbourhoods. These results demonstrate how distributed patterns of activation naturally give rise to symbolic structures. We conclude by arguing that the modelling framework exemplified here is a powerful tool for integrating behavioural and neurophysiological results.
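The abstract does not state its two measures in closed form; as a rough illustration of one information-theoretical measure over a morpheme's distribution of usages, Shannon entropy can be computed as follows (the function name and input format are our own, not the paper's):

```python
import math
from collections import Counter

def usage_entropy(usages):
    """Shannon entropy (in bits) of a morpheme's distribution of usages.

    `usages` is a list of observed usage labels; higher entropy means the
    morpheme's usages are spread more evenly across senses or forms.
    """
    counts = Counter(usages)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A morpheme observed equally often in four forms carries 2 bits of entropy:
print(usage_entropy(["walk", "walks", "walked", "walking"]))  # → 2.0
```

A measure of this kind could then serve as a predictor of behavioural response latencies, as the paper does with its own measures.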

    Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks

    Neural networks are successfully used to imitate and model cognitive processes. However, to provide clues about the neurobiological mechanisms enabling human cognition, these models need to mimic the structure and function of real brains. Brain-constrained networks differ from classic neural networks by implementing brain similarities at different scales, ranging from the micro- and mesoscopic levels of neuronal function, local neuronal links and circuit interaction to large-scale anatomical structure and between-area connectivity. This review shows how brain-constrained neural networks can be applied to study in silico the formation of mechanisms for symbol and concept processing and to work towards neurobiological explanations of specifically human cognitive abilities. These include verbal working memory and learning of large vocabularies of symbols, semantic binding carried by specific areas of cortex, attention focusing and modulation driven by symbol type, and the acquisition of concrete and abstract concepts partly influenced by symbols. Neuronal assembly activity in the networks is analyzed to deliver putative mechanistic correlates of higher cognitive processes and to develop candidate explanations founded in established neurobiological principles.

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors) where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey is focused mainly on the networks of Hopfield, Willshaw and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory, but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high-dimensional vectors.
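A minimal sketch of the Hebbian outer-product storage and iterative recall scheme that the surveyed Hopfield-style networks share (the pattern count, network size and corruption level below are illustrative choices, not from the survey):

```python
import numpy as np

def store(patterns):
    """Hebbian (outer-product) storage of +/-1 patterns in a Hopfield weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, probe, steps=20):
    """Iterate the synchronous sign update until a fixed point (or the step limit)."""
    s = probe.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 64))  # two random bipolar patterns
W = store(patterns)
probe = patterns[0].copy()
probe[:8] *= -1                               # corrupt 8 of the 64 bits
recovered = recall(W, probe)
print(np.array_equal(recovered, patterns[0])) # the stored pattern is restored
```

At this light loading (2 patterns in 64 neurons, well below the ~0.14N Hopfield capacity) a corrupted probe falls back into the stored pattern's basin of attraction; the capacity and search-speed questions the survey raises concern what happens as the load grows.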

    Conflict resolution and learning probability matching in a neural cell-assembly architecture

    Donald Hebb proposed a hypothesis that specialised groups of neurons, called cell-assemblies (CAs), form the basis for neural encoding of symbols in the human mind. It is not clear, however, how CAs can be re-used and combined to form new representations as in classical symbolic systems. We demonstrate that Hebbian learning of synaptic weights alone is not adequate for all tasks, and that additional meta-control processes should be involved. We describe an earlier proposed architecture implementing an adaptive conflict resolution process between CAs, and then evaluate it by modelling the probability matching phenomenon in a classic two-choice task. The model and its results are discussed in view of the mathematical theory of learning and existing cognitive architectures.
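The paper's cell-assembly architecture is not reproduced here; as a reference point for the phenomenon it models, a classic linear reward-penalty learner is about the simplest scheme that produces probability matching in a two-choice task. The reward probabilities (0.7 and 0.3), learning rate and trial count below are assumed for illustration:

```python
import random

def probability_matching(p_reward=(0.7, 0.3), lr=0.01, trials=20000, seed=0):
    """Linear reward-penalty learner on a two-choice task.

    A rewarded choice pulls the choice probability `p` toward that option;
    an unrewarded one pushes it toward the alternative. For these reward
    probabilities the fixed point of the expected update is p = 0.7, i.e.
    the learner *matches* the reward rate rather than maximising (p = 1).
    """
    rng = random.Random(seed)
    p = 0.5                                   # probability of picking option 0
    for _ in range(trials):
        choice = 0 if rng.random() < p else 1
        toward = 1.0 if choice == 0 else 0.0  # direction of a rewarding outcome
        if rng.random() < p_reward[choice]:
            p += lr * (toward - p)            # rewarded: move toward the choice
        else:
            p += lr * ((1.0 - toward) - p)    # unrewarded: move toward the other
    return p

print(round(probability_matching(), 2))       # settles near 0.7, not near 1.0
```

That matching rather than maximising is what simple reward-driven weight updates yield is one way to read the paper's point that Hebbian learning alone is not adequate for all tasks.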

    A model of probability matching in a two-choice task based on stochastic control of learning in neural cell-assemblies.

    Donald Hebb proposed a hypothesis that specialised groups of neurons, called cell-assemblies (CAs), form the basis for neural encoding of symbols in the human mind. It is not clear, however, how CAs can be re-used and combined to form new representations as in classical symbolic systems. We demonstrate that Hebbian learning of synaptic weights alone is not adequate for all tasks, and that additional meta-control processes should be involved. We describe an earlier proposed architecture [Belavkin 2008] implementing such a process, and then evaluate it by modelling the probability matching phenomenon in a classic two-choice task. The model and its results are discussed in view of the mathematical theory of learning and existing cognitive architectures, as well as some hypotheses about neural functioning in the brain.

    Investigating the storage capacity of a network with cell assemblies

    Cell assemblies are co-operating groups of neurons believed to exist in the brain. Their existence was proposed by the neuropsychologist D.O. Hebb, who also formulated a mechanism by which they could form, now known as Hebbian learning. Evidence for the existence of Hebbian learning and cell assemblies in the brain is accumulating as investigation tools improve. Researchers have also simulated cell assemblies as neural networks in computers. This thesis describes simulations of networks of cell assemblies. The feasibility of simulated cell assemblies that possess all the predicted properties of biological cell assemblies is established. Cell assemblies can be coupled together with weighted connections to form hierarchies in which a group of basic assemblies, termed primitives, are connected in such a way that they form a compound cell assembly. The component assemblies of these hierarchies can be ignited independently, i.e. they are activated due to signals being passed entirely within the network, but if a sufficient number of them are activated, they co-operate to ignite the remaining primitives in the compound assembly. Various experiments are described in which networks of simulated cell assemblies are subject to external activation involving cells in those assemblies being stimulated artificially to a high level. These cells then fire, i.e. produce a spike of activity analogous to the spiking of biological neurons, and in this way pass their activity to other cells. Connections are established, by learning in some experiments and set artificially in others, between cells within primitives and in different ones, and these connections allow activity to pass from one primitive to another. In this way, activating one or more primitives may cause others to ignite. Experiments are described in which spontaneous activation of cells aids recruitment of uncommitted cells to a neighbouring assembly.
The strong relationship between cell assemblies and Hopfield nets is described. A network of simulated cells can support different numbers of assemblies depending on the complexity of those assemblies. Assemblies are classified in terms of how many primitives are present in each compound assembly and the minimum number needed to complete it. A 2-3 assembly contains 3 primitives, any 2 of which will complete it. A network of N cells can hold on the order of N 2-3 assemblies, and an architecture is proposed that contains O(N²) 3-4 assemblies. Experiments are described that show the number of connections emanating from each cell must be scaled up linearly as the number of primitives in any network increases in order to maintain the same mean number of connections between each primitive. Restricting each cell to a maximum number of connections leads to severe loss of performance as the size of the network increases. It is shown that the architecture can be duplicated with Hopfield nets, but that there are severe restrictions on the carrying capacity of either a hierarchy of cell assemblies or a Hopfield net storing 3-4 patterns, and that the promise of N² patterns is largely illusory. When the number of connections from each cell is fixed as the number of primitives is increased, only O(N) cell assemblies can be stored.
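As a toy illustration of the completion behaviour described above (our own sketch, not the thesis's simulator), three mutually connected primitives with an illustrative weight matrix and threshold reproduce a 2-3 compound assembly: igniting any two primitives completes the third, while one alone is not enough:

```python
import numpy as np

# Hypothetical 2-3 compound assembly: three primitives with mutual
# excitatory links. Weights and threshold are illustrative choices.
W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
THETA = 1.5  # a primitive ignites when input from active neighbours exceeds this

def complete(active):
    """One step of threshold dynamics over the three primitives."""
    active = np.asarray(active, dtype=float)
    return ((W @ active > THETA) | (active > 0)).astype(int)

print(complete([1, 1, 0]))  # two active primitives ignite the third → [1 1 1]
print(complete([1, 0, 0]))  # one alone stays below threshold → [1 0 0]
```

The same threshold logic generalises to the 3-4 case by adding a fourth primitive and raising the threshold so that three active neighbours, but not two, ignite the rest.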

    Hebbian fast plasticity and working memory

    Theories and models of working memory (WM) have been dominated, at least since the mid-1990s, by the persistent activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in the light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty in explaining robust multi-item WM. In consequence, alternative theories are now explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian vs Hebbian synaptic plasticity emerges naturally in this context. In this review we focus on fast Hebbian plasticity and trace the origins of WM theories and models building on this form of associative learning.
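A minimal sketch of the kind of mechanism the review surveys (our own toy formulation, not a model from the review): a single co-activation imprints a fast Hebbian trace, the trace decays passively during a delay with no ongoing spiking, and a partial cue later reads the item back out — "activity-silent" maintenance:

```python
import numpy as np

def imprint(pre, post, eta=1.0):
    """Fast Hebbian weight change from one co-activation (outer product)."""
    return eta * np.outer(post, pre)

def decay(W, tau=20.0, dt=1.0):
    """Passive exponential decay of the synaptic trace during the delay."""
    return W * np.exp(-dt / tau)

item = np.array([1.0, 0.0, 1.0, 0.0])  # illustrative stored activity pattern
W = imprint(item, item)                # encode once, via fast plasticity
for _ in range(10):                    # silent delay: no spiking required
    W = decay(W)
readout = W @ np.array([1.0, 0.0, 0.0, 0.0])  # probe with a partial cue
print(readout.argmax() in (0, 2))      # the cue recalls the stored item → True
```

The trace weakens over the delay but its pattern survives, which is why a later probe can still retrieve the item without persistent activity; the time constants and learning rate here are arbitrary illustrative values.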

    A brain-inspired cognitive system that mimics the dynamics of human thought

    In recent years, some impressive AI systems have been built that can play games and answer questions about large quantities of data. However, we are still a very long way from AI systems that can think and learn in a human-like way. We have a great deal of information about how the brain works and can simulate networks of hundreds of millions of neurons. So it seems likely that we could use our neuroscientific knowledge to build brain-inspired artificial intelligence that acts like humans on similar timescales. This paper describes an AI system that we have built using a brain-inspired network of artificial spiking neurons. On a word recognition and colour naming task, our system behaves like human subjects on a similar timescale. In the longer term, this type of AI technology could lead to more flexible general-purpose artificial intelligence and to more natural human-computer interaction.

    Toward a further understanding of object feature binding: a cognitive neuroscience perspective.

    The aim of this thesis is to lead to a further understanding of the neural mechanisms underlying object feature binding in the human brain. The focus is on information processing and integration in the visual system and visual short-term memory. From a review of the literature it is clear that there are three major competing binding theories; however, none of these individually solves the binding problem satisfactorily. Thus the aim of this research is to conduct behavioural experimentation into object feature binding, paying particular attention to visual short-term memory. The behavioural experiment was designed and conducted using a within-subjects delayed response task comprising a battery of sixty-four composite objects, each with three features and four dimensions, in each of three conditions (spatial, temporal and spatio-temporal). Findings from the experiment, which focus on spatial and temporal aspects of object feature binding and feature proximity on binding errors, support the spatial theories of object feature binding; in addition we propose that temporal theories and convergence, through hierarchical feature analysis, are also involved. Because spatial properties have a dedicated processing neural stream, and temporal properties rely on limited-capacity memory systems, memories for sequential information would likely be more difficult to accurately recall. Our study supports other studies which suggest that both spatial and temporal coherence, to differing degrees, may be involved in object feature binding. Traditionally, these theories have purported to provide individual solutions, but this thesis proposes a novel unified theory of object feature binding in which hierarchical feature analysis, spatial attention and temporal synchrony each plays a role. It is further proposed that binding takes place in visual short-term memory through concerted and integrated information processing in distributed cortical areas.
A cognitive model detailing this integrated proposal is given. Next, the cognitive model is used to inform the design and suggested implementation of a computational model which would be able to test the theory put forward in this thesis. In order to verify the model, future work is needed to implement the computational model. Thus it is argued that this doctoral thesis provides valuable experimental evidence concerning spatio-temporal aspects of the binding problem and as such is an additional building block in the quest for a solution to the object feature binding problem.