
    Dynamics of Neural Networks with Continuous Attractors

    We investigate the dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of stationary states. We systematically explore how this neutral stability facilitates the tracking performance of a CANN, which is believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network's stationary states in state space. We quantify the distortions of the bump shape during tracking and study their effects on tracking performance. Results are obtained on the maximum speed at which a moving stimulus remains trackable, and on the reaction time to catch up with an abrupt change in the stimulus. Comment: 6 pages, 7 figures with 4 captions
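    The tracking behaviour described above can be sketched with a toy 1D continuous attractor network: a ring of rate neurons with a translation-invariant (Gaussian) recurrent kernel, driven by a bump-shaped input that moves at constant speed. All parameters, the threshold-linear rate function, and the sub-critical recurrent gain are illustrative choices, not the paper's model.

```python
import numpy as np

N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)   # preferred positions on a ring
a, tau, dt = 0.5, 1.0, 0.05                         # kernel width, time constant, step

def ring_dist(d):
    """Shortest signed distance on the ring [-pi, pi)."""
    return (d + np.pi) % (2 * np.pi) - np.pi

# translation-invariant recurrent kernel, scaled below self-sustaining gain
J = np.exp(-ring_dist(x[:, None] - x[None, :]) ** 2 / (2 * a ** 2))
J *= 0.8 / J.sum(axis=1, keepdims=True)

def track(stim_pos, steps=400):
    """Integrate rate dynamics while the stimulus position moves."""
    u = np.zeros(N)
    for t in range(steps):
        I_ext = 0.5 * np.exp(-ring_dist(x - stim_pos(t * dt)) ** 2 / (2 * a ** 2))
        r = np.maximum(u, 0.0)                      # threshold-linear rates
        u += dt / tau * (-u + J @ r + I_ext)
    return u

v = 0.02                                            # stimulus speed (rad / time unit)
u = track(lambda t: v * t)
bump_peak = x[np.argmax(u)]                         # bump peak trails the stimulus
final_stim = v * 400 * dt                           # final stimulus position (0.4 rad)
```

    Because the recurrent gain is below one, the bump here is input-driven rather than self-sustained; it still exhibits the key effect the abstract discusses, namely a small tracking lag behind the moving stimulus.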

    Dynamical model of sequential spatial memory: winnerless competition of patterns

    We introduce a new biologically motivated model of sequential spatial memory which is based on the principle of winnerless competition (WLC). We implement this mechanism in a two-layer neural network structure and present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of pre-recorded sequences of spatial patterns. Comment: 4 pages, submitted to PR
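    The winnerless-competition principle is commonly illustrated with generalized Lotka-Volterra dynamics under asymmetric inhibition: activity cycles through a sequence of saddle states, each unit "winning" in turn. The sketch below uses a standard three-unit competition matrix as an illustration; it is not the paper's two-layer network.

```python
import numpy as np

# Asymmetric competition matrix: each unit inhibits one neighbour strongly
# (2.0) and the other weakly (0.5), producing a heteroclinic cycle.
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

def step(a, dt=0.01):
    # generalized Lotka-Volterra update (Euler integration)
    return a + dt * a * (1.0 - rho @ a)

a = np.array([0.9, 0.05, 0.05])      # unit 0 starts as the winner
winners = []
for _ in range(20000):
    a = step(a)
    winners.append(int(np.argmax(a)))

# deduplicated sequence of winners: activity switches 0 -> 2 -> 1 -> 0 -> ...
order = [w for i, w in enumerate(winners) if i == 0 or w != winners[i - 1]]
```

    Each "winner" corresponds to one stored pattern being active; a sequence of such switches is what lets a WLC network replay a pre-recorded sequence.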

    An associative network with spatially organized connectivity

    We investigate the properties of an autoassociative network of threshold-linear units whose synaptic connectivity is spatially structured and asymmetric. Since the methods of equilibrium statistical mechanics cannot be applied to such a network due to the lack of a Hamiltonian, we approach the problem through a signal-to-noise analysis, which we adapt to spatially organized networks. We analyze the conditions for the appearance of stable, spatially non-uniform profiles of activity with large overlaps with one of the stored patterns. It is also shown, with simulations and analytic results, that the storage capacity does not decrease much when the connectivity of the network becomes short-range. In addition, the method used here enables us to calculate exactly the storage capacity of a randomly connected network with an arbitrary degree of dilution. Comment: 27 pages, 6 figures; Accepted for publication in JSTA
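    The retrieval dynamics of such a network can be sketched in a simplified, fully connected (spatially unstructured) form: sparse binary patterns stored with a covariance Hebbian rule, and threshold-linear units iterated from a noisy cue. All parameters, and the choice of gain, are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, f = 400, 5, 0.2                       # neurons, patterns, sparsity
xi = (rng.random((P, N)) < f).astype(float) # sparse binary patterns

# covariance (Hebbian) learning rule, no self-connections
J = (xi - f).T @ (xi - f) / (N * f * (1 - f))
np.fill_diagonal(J, 0.0)

def retrieve(cue, steps=30, gain=1.25):
    """Iterate threshold-linear units; gain ~ 1/(1-f) keeps activity stable."""
    v = cue.copy()
    for _ in range(steps):
        v = np.maximum(gain * (J @ v), 0.0)  # threshold-linear update
    return v

# cue: pattern 0 with 10% of its entries flipped
cue = xi[0].copy()
flip = rng.random(N) < 0.1
cue[flip] = 1 - cue[flip]

v = retrieve(cue)
# cosine overlap between the retrieved state and the stored pattern
overlap = (v / (np.linalg.norm(v) + 1e-12)) @ (xi[0] / np.linalg.norm(xi[0]))
```

    With few stored patterns the crosstalk term is small and the dynamics cleans up the corrupted cue, converging to a state with a large overlap with the stored pattern, which is the regime the signal-to-noise analysis characterizes.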

    A Case of Leishmaniasis in a Hospital: Difficulties of Clinical Diagnosis

    The article presents a review of the literature on leishmaniasis. The disease is not endemic in the Moscow region; nevertheless, doctors of all specializations should be alert to its development, especially in autumn, owing to the extensive migration of the population. The authors describe the basic forms of leishmaniasis, its clinical symptoms, the difficulties of its diagnosis, and modern methods of treatment. The work also presents a clinical case confirmed by serological tests and other methods.

    Continuous Attractors with Morphed/Correlated Maps

    Continuous attractor networks are used to model the storage and representation of analog quantities, such as the position of a visual stimulus. The storage of multiple continuous attractors in the same network has previously been studied in the context of self-position coding. Several uncorrelated maps of environments are stored in the synaptic connections, and a position in a given environment is represented by a localized pattern of neural activity in the corresponding map, driven by a spatially tuned input. Here we analyze networks storing a pair of correlated maps, or a morph sequence between two uncorrelated maps. We find a novel state in which the network activity is simultaneously localized in both maps. In this state, a fixed cue presented to the network does not uniquely determine the location of the bump, i.e., the response is unreliable, with neurons not always responding when their preferred input is present. When the tuned input varies smoothly in time, the neuronal responses become reliable and selective for the environment: the subset of neurons responsive to a moving input in one map changes almost completely in the other map. This form of remapping is a non-trivial transformation between the tuned input to the network and the resulting tuning curves of the neurons. The new state of the network could be related to the formation of direction selectivity in one-dimensional environments and to hippocampal remapping. The applicability of the model is not confined to self-position representations; we show an instance of the network solving a simple delayed discrimination task.

    Emotion in the Common Model of Cognition

    Emotions play an important role in human cognition and therefore need to be present in the Common Model of Cognition. In this paper, the emotion working group focuses on functional aspects of emotions and describes what we believe are the points of interaction with the Common Model of Cognition. The present paper should not be viewed as a consensus of the group but rather as a first attempt to extract common and divergent aspects of different models of emotions and how they relate to the Common Model of Cognition.

    Theta-paced flickering between place-cell maps in the hippocampus

    The ability to recall discrete memories is thought to depend on the formation of attractor states in recurrent neural networks. In such networks, representations can be reactivated reliably from subsets of the cues that were present when the memory was encoded, at the same time as interference from competing representations is minimized. Theoretical studies have pointed to the recurrent CA3 system of the hippocampus as a possible attractor network. Consistent with predictions from these studies, experiments have shown that place representations in CA3 and downstream CA1 tolerate small changes in the configuration of the environment but switch to uncorrelated representations when dissimilarities become larger. The kinetics supporting such network transitions, at the subsecond time scale, is poorly understood, however. Here we show that instantaneous transformation of the spatial context ('teleportation') does not change the hippocampal representation all at once but is followed by temporary bistability in the discharge activity of CA3 ensembles. Rather than sliding through a continuum of intermediate activity states, the CA3 network undergoes a short period of competitive flickering between pre-formed representations for past and present environment, before settling on the latter. Network flickers are extremely fast, often with complete replacement of the active ensemble from one theta cycle to the next. Within individual cycles, segregation is stronger towards the end, when firing starts to decline, pointing to the theta cycle as a temporal unit for expression of attractor states in the hippocampus. Repetition of pattern-completion processes across successive theta cycles may facilitate error correction and enhance discriminative power in the presence of weak and ambiguous input cues.

    Mathematical modelling and numerical simulation of the morphological development of neurons

    BACKGROUND: The morphological development of neurons is a very complex process involving both genetic and environmental components. Mathematical modelling and numerical simulation are valuable tools in helping us unravel particular aspects of how individual neurons grow their characteristic morphologies and eventually form appropriate networks with each other. METHODS: A variety of mathematical models that consider (1) neurite initiation, (2) neurite elongation, (3) axon pathfinding, and (4) neurite branching and dendritic shape formation are reviewed. The different mathematical techniques employed are also described. RESULTS: Some comparison of modelling results with experimental data is made. A critique of different modelling techniques is given, leading to a proposal for a unified modelling environment for models of neuronal development. CONCLUSION: A unified mathematical and numerical simulation framework should lead to an expansion of work on models of neuronal development, as has occurred with compartmental models of neuronal electrical activity.

    The spike-timing-dependent learning rule to encode spatiotemporal patterns in a network of spiking neurons

    We study associative memory neural networks based on the Hodgkin-Huxley type of spiking neurons. We introduce a spike-timing-dependent learning rule, in which a time window with a negative part as well as a positive part is used to describe biologically plausible synaptic plasticity. The learning rule is applied to encode a number of periodic spatiotemporal patterns, which are successfully reproduced in the periodic firing patterns of the spiking neurons during memory retrieval. Global inhibition is incorporated into the model so as to induce gamma oscillation. The occurrence of gamma oscillation turns out to provide appropriate spike timings for memory retrieval of discrete-type spatiotemporal patterns. A theoretical analysis to elucidate the stationary properties of the perfect retrieval state is conducted in the limit of an infinite number of neurons and shows good agreement with the results of numerical simulations. This analysis indicates that the presence of the negative and positive parts in the form of the time window helps reduce the size of the crosstalk term, implying that a time window with negative and positive parts is suitable for encoding a number of spatiotemporal patterns. We draw some phase diagrams, in which we find various types of phase transitions as the intensity of global inhibition changes. Comment: Accepted for publication in Physical Review
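    A time window with both a positive (pre-before-post) and a negative (post-before-pre) part can be sketched with the common exponential STDP form. The exponential shape and its amplitudes and time constants below are textbook illustrative choices, not the specific window used in the paper.

```python
import numpy as np

A_plus, A_minus = 1.0, 0.8           # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0     # time constants (ms)

def W(dt):
    """STDP time window; dt = t_post - t_pre in ms."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),     # positive part
                    -A_minus * np.exp(dt / tau_minus))   # negative part

def weight_change(pre_spikes, post_spikes):
    """Total weight change: sum the window over all pre/post spike pairs."""
    dts = np.subtract.outer(post_spikes, pre_spikes)
    return W(dts).sum()

# pre fires 5 ms before post -> potentiation
dw_causal = weight_change(np.array([10.0]), np.array([15.0]))
# post fires 5 ms before pre -> depression
dw_acausal = weight_change(np.array([15.0]), np.array([10.0]))
```

    The negative lobe penalizes acausal pre/post pairings, which is the property the analysis credits with shrinking the crosstalk term when many spatiotemporal patterns are encoded.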