    A neural cognitive architecture

    It is difficult to study the mind, but cognitive architectures are one tool for doing so. Because the mind emerges from the behaviour of the brain, neuropsychological methods offer another route, though a rather indirect one. A cognitive architecture implemented in spiking neurons is a way of studying the mind that can use neuropsychological evidence directly. A neural cognitive architecture, based on rule-based systems and associative memory, can be readily implemented and would provide a good bridge between standard cognitive architectures, such as Soar, and neuropsychology. This architecture could be implemented in spiking neurons and made available via the Human Brain Project, which provides a good collaborative environment. It could be readily extended to use spiking neurons for subsystems, such as spatial reasoning, and could evolve over time toward a complete architecture. The theory behind the architecture could evolve as well: simplifying assumptions, made explicit, such as those behind the rule-based system, could gradually be replaced by more neuropsychologically accurate behaviour. The overall task of collaborative architecture development would be eased by direct evidence of the actual neural cognitive architectures in human brains. While the initial architecture is biologically inspired, the ultimate goal is a biological cognitive architecture.
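    The abstract pairs a rule-based system with an associative memory. As a rough illustration only, the Python sketch below shows the two components side by side: a forward-chaining rule engine and an outer-product associative memory. All names and the Hebbian storage rule here are illustrative assumptions, not the paper's implementation.

    import numpy as np

    class AssociativeMemory:
        """Hetero-associative memory with a Hebbian outer-product rule (assumed)."""
        def __init__(self, dim):
            self.W = np.zeros((dim, dim))

        def store(self, cue, target):
            # Hebbian learning: strengthen the cue -> target mapping.
            self.W += np.outer(target, cue)

        def recall(self, cue):
            # Threshold the weighted sum; binary vectors are assumed.
            return (self.W @ cue > 0.5 * cue.sum()).astype(float)

    def run_rules(rules, working_memory):
        # Forward-chain: fire any rule whose condition set is in working memory.
        fired = True
        while fired:
            fired = False
            for condition, action in rules:
                if condition <= working_memory and not action <= working_memory:
                    working_memory |= action
                    fired = True
        return working_memory

    # Example: two rules chained to completion.
    rules = [({"wet", "cold"}, {"indoors"}), ({"indoors"}, {"warm"})]
    print(run_rules(rules, {"wet", "cold"}))  # {'wet', 'cold', 'indoors', 'warm'}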

    Changes in the Striatal Network Connectivity in Parkinsonian and Dyskinetic Rodent Models

    In Parkinson’s disease, there is a loss of dopaminergic innervation in the basal ganglia. The lack of dopamine produces substantial changes in neural plasticity and generates pathological activity patterns between basal ganglia nuclei. The treatment to relieve Parkinsonism is the administration of levodopa; however, the treatment produces dyskinesia. The question to answer is how the interactions between neurons in brain microcircuits change under these pathological conditions. Calcium imaging is a way to record the activity of dozens of neurons simultaneously, with single-cell resolution, in brain slices from rodents. We studied these interactions in the striatum, since it is the nucleus of the basal ganglia that receives the major dopaminergic innervation. We used network analysis, in which each active neuron is taken as a node and its coactivity with other neurons defines its functional connections. The resulting network represents the functional connectome of the striatal microcircuit, which can be characterized with a small set of parameters taken from graph theory. We then quantify the pathological changes at the functional histological scale and the differences between normal and pathological conditions.
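    As a sketch of this network construction (assuming a binarized event raster already extracted from the calcium-imaging recordings; the function name and the Jaccard coactivity threshold are illustrative, not the authors' exact pipeline), one could build and characterize the functional connectome like this:

    import numpy as np
    import networkx as nx

    def functional_connectome(raster, threshold):
        """Nodes are active neurons; edges link pairs with high coactivity.

        raster: (n_neurons, n_frames) binary array of activity events.
        threshold: minimum Jaccard coactivity for a functional connection.
        """
        n = raster.shape[0]
        g = nx.Graph()
        g.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                both = np.logical_and(raster[i], raster[j]).sum()
                either = np.logical_or(raster[i], raster[j]).sum()
                if either and both / either >= threshold:
                    g.add_edge(i, j)
        return g

    # Toy raster of 50 neurons over 1000 frames, then a few graph-theory
    # parameters of the kind used to characterize the microcircuit.
    raster = (np.random.rand(50, 1000) < 0.05).astype(int)
    g = functional_connectome(raster, threshold=0.02)
    print(nx.density(g), nx.average_clustering(g))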

    Neural Mechanisms for Information Compression by Multiple Alignment, Unification and Search

    This article describes how an abstract framework for perception and cognition may be realised in terms of neural mechanisms and neural processing. This framework — called information compression by multiple alignment, unification and search (ICMAUS) — has been developed in previous research as a generalized model of any system for processing information, either natural or artificial. It has a range of applications, including the analysis and production of natural language, unsupervised inductive learning, recognition of objects and patterns, probabilistic reasoning, and others. The proposals in this article may be seen as an extension and development of Hebb’s (1949) concept of a ‘cell assembly’. The article describes how the concept of ‘pattern’ in the ICMAUS framework may be mapped onto a version of the cell assembly concept, and the way in which neural mechanisms may achieve the effect of ‘multiple alignment’ in the ICMAUS framework. By contrast with the Hebbian concept of a cell assembly, it is proposed here that any one neuron can belong to one and only one assembly. A key feature of the present proposals, which is not part of the Hebbian concept, is that any cell assembly may contain ‘references’ or ‘codes’ that serve to identify one or more other cell assemblies. This mechanism allows information to be stored in a compressed form; it provides a robust mechanism by which assemblies may be connected to form hierarchies and other kinds of structure; it means that assemblies can express abstract concepts; and it provides solutions to some of the other problems associated with cell assemblies. Drawing on insights derived from the ICMAUS framework, the article also describes how learning may be achieved with neural mechanisms. This concept of learning is significantly different from the Hebbian concept and appears to provide a better account of what we know about human learning.
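    To make the reference-and-code idea concrete, here is a toy sketch in Python (the representation and all names are assumptions for illustration, not drawn from the ICMAUS papers): an assembly is a sequence whose elements are either primitive symbols or codes naming other assemblies, so shared structure is stored once and reused, which is the compression.

    assemblies = {
        "NOUN": ["cat"],
        "NP": ["the", "Ref:NOUN"],       # 'Ref:NOUN' is a code naming another assembly
        "S": ["Ref:NP", "sat", "down"],  # hierarchy built purely from references
    }

    def expand(name, assemblies):
        """Recursively replace reference codes with the assemblies they name."""
        out = []
        for element in assemblies[name]:
            if element.startswith("Ref:"):
                out.extend(expand(element[len("Ref:"):], assemblies))
            else:
                out.append(element)
        return out

    print(expand("S", assemblies))  # ['the', 'cat', 'sat', 'down']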

    A few strong connections: optimizing information retention in neuronal avalanches

    Background: How living neural networks retain information is still incompletely understood. Two prominent ideas on this topic have developed in parallel, but have remained somewhat unconnected. The first of these, the "synaptic hypothesis," holds that information can be retained in synaptic connection strengths, or weights, between neurons. Recent work inspired by statistical mechanics has suggested that networks will retain the most information when their weights are distributed in a skewed manner, with many weak weights and only a few strong ones. The second of these ideas is that information can be represented by stable activity patterns. Multineuron recordings have shown that sequences of neural activity distributed over many neurons are repeated above chance levels when animals perform well-learned tasks. Although these two ideas are compelling, no one to our knowledge has yet linked the predicted optimum distribution of weights to stable activity patterns actually observed in living neural networks.
    Results: Here, we explore this link by comparing stable activity patterns from cortical slice networks recorded with multielectrode arrays to stable patterns produced by a model with a tunable weight distribution. This model was previously shown to capture central features of the dynamics in these slice networks, including neuronal avalanche cascades. We find that when the model weight distribution is appropriately skewed, it correctly matches the distribution of repeating patterns observed in the data. In addition, this same distribution of weights maximizes the capacity of the network model to retain stable activity patterns. Thus, the distribution that best fits the data is also the distribution that maximizes the number of stable patterns.
    Conclusions: We conclude that local cortical networks are very likely to use a highly skewed weight distribution to optimize information retention, as predicted by theory. Fixed distributions impose constraints on learning, however. The network must have mechanisms for preserving the overall weight distribution while allowing individual connection strengths to change with learning.
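    As a rough illustration of the ingredients described above (the lognormal weight distribution and the cascade rule below are assumptions standing in for the paper's tunable model, not its actual parameters), one can sample a skewed weight matrix with many weak and a few strong connections and run simple probabilistic avalanches on it:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64

    # Skewed weight matrix: many weak connections, only a few strong ones.
    w = rng.lognormal(mean=-5.0, sigma=1.0, size=(n, n))
    np.fill_diagonal(w, 0.0)
    w = np.clip(w, 0.0, 1.0)  # treat weights as transmission probabilities

    def avalanche(start, w, rng):
        """Branching cascade: each frontier unit recruits unit j with prob w[i, j]."""
        active = np.zeros(n, dtype=bool)
        active[start] = True
        frontier = active.copy()
        size = 1
        while frontier.any():
            # Probability that at least one unit in the frontier recruits unit j.
            p = 1.0 - np.prod(1.0 - w[frontier], axis=0)
            recruits = (rng.random(n) < p) & ~active
            active |= recruits
            frontier = recruits
            size += int(recruits.sum())
        return size

    sizes = [avalanche(rng.integers(n), w, rng) for _ in range(500)]
    print("mean avalanche size:", np.mean(sizes))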