27 research outputs found

    Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos

    Full text link
In neural information processing, an input modulates neural dynamics to generate a desired output. To unravel the dynamics and the underlying connectivity that enable such input-output associations, we propose an exactly solvable neural-network model whose connectivity matrix is built explicitly from the inputs and the required outputs. An analytic form of the response to an input is derived, and three distinct types of response, including chaotic dynamics, arise as bifurcations against input strength, depending on the neural sensitivity and the number of inputs. Optimal performance is achieved at the onset of chaos, and the relevance of these results to cognitive dynamics is discussed.
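The model described above couples a connectivity matrix built from the input and required-output patterns to rate dynamics whose response is swept against the input strength. A minimal sketch of that setup follows; the Hebbian-like matrix construction, the gain beta, the input scaling gamma, and all parameter values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                       # neurons, number of input/output associations (assumed sizes)

# Random binary input patterns eta and required output patterns xi (illustrative choice)
eta = rng.choice([-1.0, 1.0], size=(P, N))
xi  = rng.choice([-1.0, 1.0], size=(P, N))

# Connectivity built explicitly from the required outputs and the inputs (assumed Hebbian-like form)
J = xi.T @ eta / N

def overlap_with_target(gamma, mu=0, beta=2.0, T=400.0, dt=0.1):
    """Rate dynamics dx/dt = -x + tanh(beta*(J x + gamma*eta_mu)); overlap of the final state with xi_mu."""
    x = 0.1 * rng.standard_normal(N)
    for _ in range(int(T / dt)):
        x += dt * (-x + np.tanh(beta * (J @ x + gamma * eta[mu])))
    return x @ xi[mu] / N

# Sweep the input strength: weak input leaves the intrinsic dynamics largely untouched,
# strong input pulls the network toward the requested output.
for gamma in (0.0, 0.2, 0.5, 1.0, 2.0):
    print(f"gamma={gamma:.1f}  overlap with target: {overlap_with_target(gamma):+.2f}")
```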

    Learning Shapes Spontaneous Activity Itinerating over Memorized States

    Get PDF
Learning is a process that helps create neural dynamical systems so that an appropriate output pattern is generated for a given input. Often, such a memory is considered to reside in one of the attractors of the neural dynamical system, reached from the initial neural state specified by an input. Neither the neural activity observed in the absence of inputs nor the change in activity caused when an input is provided was studied extensively in the past. Recent experimental studies, however, have reported structured spontaneous neural activity and its changes when an input is provided. Against this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon application of an input, a phenomenon known as a bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, input/output relations are successively memorized when the difference between the time scales is appropriate. After learning is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding agrees remarkably well with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity is a natural outcome of successive learning of several patterns and that it facilitates bifurcation of the network when an input is provided.
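The key ingredient above is a reinforcement-based learning rule acting on synapses with two different time scales. The toy sketch below, a reward-modulated (REINFORCE-like) update split into a fast, quickly decaying weight component and a slow, consolidating one, only illustrates the basic mechanics; the single-layer setup and all parameter values are assumptions, not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(1)
N_in, N_out, P = 40, 40, 3                      # layer sizes and number of I/O pairs (assumed)

inputs  = rng.choice([-1.0, 1.0], size=(P, N_in))
targets = rng.choice([-1.0, 1.0], size=(P, N_out))

W_slow = np.zeros((N_out, N_in))                # slow synapses: consolidate over many trials
W_fast = np.zeros((N_out, N_in))                # fast synapses: learn and decay quickly
R_bar  = np.zeros(P)                            # running reward baseline per pattern
lam_fast, lr_fast, lr_slow, sigma = 0.9, 0.5, 0.02, 0.3   # assumed values

for trial in range(3000):
    mu = trial % P                              # present the I/O pairs successively
    x = inputs[mu]
    noise = sigma * rng.standard_normal(N_out)  # exploratory perturbation of the output
    y = np.tanh((W_slow + W_fast) @ x + noise)
    reward = -np.mean((targets[mu] - y) ** 2)   # reinforcement signal: closeness to the target
    dW = (reward - R_bar[mu]) * np.outer(noise, x) / N_in   # reward-modulated (REINFORCE-like) update
    R_bar[mu] += 0.1 * (reward - R_bar[mu])
    W_fast = lam_fast * W_fast + lr_fast * dW   # fast component tracks the currently presented pattern
    W_slow += lr_slow * dW                      # slow component accumulates across all patterns

for mu in range(P):                             # check how well each required output is reproduced
    y = np.tanh((W_slow + W_fast) @ inputs[mu])
    print(f"pattern {mu}: overlap with target = {np.sign(y) @ targets[mu] / N_out:+.2f}")
```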

    Oscillation-Driven Memory Encoding, Maintenance, and Recall in an Entorhinal–Hippocampal Circuit Model

    No full text
During the execution of working memory tasks, task-relevant information is processed by local circuits across multiple brain regions. How this multi-area computation is carried out by the brain remains largely unknown. To explore such mechanisms in spatial working memory, we constructed a neural network model involving parvalbumin-positive, somatostatin-positive, and vasoactive intestinal polypeptide-positive interneurons in hippocampal CA1 and in the superficial and deep layers of the medial entorhinal cortex (MEC). Our model is based on the hypothesis that cholinergic modulation differentially regulates information flow across CA1 and MEC during memory encoding, maintenance, and recall in delayed nonmatching-to-place tasks. In the model, theta oscillation coordinates the proper timing of interactions between these regions. Furthermore, the model predicts that MEC is engaged in decoding as well as encoding spatial memory, which we confirmed by analysis of experimental data. Thus, our model accounts for the neurobiological characteristics of the cross-area information routing underlying working memory tasks.
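The central hypothesis here is that the theta oscillation schedules when information may flow between MEC and CA1, with separate encoding and recall windows. The following deliberately abstract toy, far simpler than the interneuron-resolved circuit model in the paper, only illustrates phase-dependent gating of two pathways; the gating thresholds, time constant, and rate equations are assumptions.

```python
import numpy as np

dt, f_theta = 1e-3, 8.0                          # 1 ms steps, 8 Hz theta rhythm
steps = int(1.0 / dt)                            # simulate one second
tau = 0.02                                       # population time constant (assumed)

mec = np.zeros(steps)                            # abstract MEC population activity
ca1 = np.zeros(steps)                            # abstract CA1 population activity
sensory = 1.0                                    # constant sensory drive arriving at MEC

phase = 2 * np.pi * f_theta * dt * np.arange(steps)
encode_gate = np.cos(phase) > 0.5                # MEC -> CA1 open only around one theta phase
recall_gate = np.cos(phase) < -0.5               # CA1 -> MEC open only around the opposite phase

for t in range(1, steps):
    mec[t] = mec[t-1] + dt / tau * (-mec[t-1] + sensory
                                    + (ca1[t-1] if recall_gate[t] else 0.0))
    ca1[t] = ca1[t-1] + dt / tau * (-ca1[t-1]
                                    + (mec[t-1] if encode_gate[t] else 0.0))

print("mean CA1 activity inside vs. outside encoding windows:",
      round(ca1[encode_gate].mean(), 3), "vs.", round(ca1[~encode_gate].mean(), 3))
```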

    Associative memory model with spontaneous neural activity

    No full text
We propose a novel associative memory model wherein the neural activity without an input (i.e., the spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of the synaptic connections enables the model to memorize a number of input/output (I/O) mappings equal to 70% of the total number of neurons, where the evoked activity distinguishes the target pattern from the others. The spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with the evoked activities, as reported in recent experimental studies.
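Since recall in this model means that an input shifts the ongoing spontaneous activity toward the memorized target, a natural diagnostic is the overlap of the evoked state with each stored target, compared against the spontaneous overlaps. The sketch below applies that diagnostic to a generic rate network; the stand-in connectivity and all parameters are assumptions, and recall is not guaranteed with them, unlike with the designed connections in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, beta, gamma = 300, 10, 2.0, 1.0            # network size, stored mappings, gain, input strength

eta = rng.choice([-1.0, 1.0], size=(P, N))       # input patterns
xi  = rng.choice([-1.0, 1.0], size=(P, N))       # target output patterns
J = (xi.T @ eta + xi.T @ xi) / N                 # stand-in connectivity mixing input and target terms

def run(drive, steps=4000, dt=0.1):
    """Relax the rate dynamics dx/dt = -x + tanh(beta*(J x + drive)) from a small random state."""
    x = 0.1 * rng.standard_normal(N)
    for _ in range(steps):
        x += dt * (-x + np.tanh(beta * (J @ x + drive)))
    return x

spont  = run(np.zeros(N))                        # spontaneous activity (no input)
evoked = run(gamma * eta[0])                     # activity evoked by input pattern 0

print("spontaneous overlaps with targets:", np.round(xi @ spont / N, 2))
print("evoked overlaps with targets     :", np.round(xi @ evoked / N, 2))
# Recall of mapping 0 would mean the evoked overlap with xi[0] clearly dominates the others.
```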

    Dynamic Organization of Hierarchical Memories

    No full text
In the brain, external objects are categorized hierarchically. Although it is widely accepted that objects are represented as static attractors in the neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential for understanding how a neural system responds to inputs. Indeed, structured spontaneous neural activity is known to exist in the absence of external inputs, and its relationship with evoked activities has been discussed. How categorical representations are embedded into the spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": without input, the neural activity wanders globally over the state space spanning all memories; as the input strength increases, this diffuse representation of the higher category transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, the neural activity wanders over a narrower region of state space containing a smaller set of memories, corresponding to a more specific category or memory selected by the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, in agreement with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy seen during the transient process as well as in the spontaneous activity.
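The starting point above is a set of hierarchically clustered memories: patterns within a category resemble each other more than patterns from other categories. One generic way to build such a pattern set (an assumed construction for illustration, not necessarily the one used in the paper) is to draw a prototype per category and derive each memory by flipping a fraction of the prototype's entries, as sketched below.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_categories, per_category, flip_prob = 500, 4, 5, 0.15   # assumed sizes and similarity level

ancestors = rng.choice([-1, 1], size=(n_categories, N))      # one prototype pattern per category

memories, labels = [], []
for a, ancestor in enumerate(ancestors):
    for _ in range(per_category):
        flips = rng.random(N) < flip_prob                    # flip a fraction of the prototype's entries
        memories.append(np.where(flips, -ancestor, ancestor))
        labels.append(a)
memories = np.array(memories, dtype=float)
labels = np.array(labels)

# Patterns from the same category overlap strongly; patterns from different categories barely overlap.
overlap = memories @ memories.T / N
same = np.equal.outer(labels, labels) & ~np.eye(len(labels), dtype=bool)
diff = ~np.equal.outer(labels, labels)
print("mean within-category overlap :", round(overlap[same].mean(), 2))   # about (1 - 2*flip_prob)**2
print("mean between-category overlap:", round(overlap[diff].mean(), 2))   # about 0
```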

    The matrix elements in the presence of the targets

    No full text
and inputs. The matrix elements plotted for the parameter values A. (16, 0.01) in the R regime, B. (2.6, 0.01) in the boundary regime, and C. (1, 0.5) in the NR regime. The same colors as those used in Fig. 5 (http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002943#pcbi-1002943-g005) are used here. The error bars represent the standard deviation.

    Embedding Responses in Spontaneous Neural Activity Shaped through Sequential Learning

    Get PDF
Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, "memories-as-bifurcations," that differs from the traditional "memories-as-attractors" viewpoint. From the memories-as-bifurcations viewpoint, memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through learning exhibit different bifurcations that make the requested targets stable as the input increases, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that such dynamics facilitate the bifurcation to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of the input and target-output patterns, in a manner similar to those designed in our previous study.
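The learning stage described above applies a simple Hebbian-type update while each input is presented, so that the requested target becomes stable under that input. The sketch below shows one plausible form of such a sequential Hebbian-type rule; the exact update, gain, learning rate, and pattern statistics in the paper may differ, so this is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P, beta, gamma, lr = 200, 8, 2.0, 1.0, 0.03          # assumed sizes and parameters

eta = rng.choice([-1.0, 1.0], size=(P, N))              # input patterns
xi  = rng.choice([-1.0, 1.0], size=(P, N))              # target output patterns
J = rng.standard_normal((N, N)) / np.sqrt(N)            # start from random connectivity

def step(x, J, drive, dt=0.1):
    return x + dt * (-x + np.tanh(beta * (J @ x + drive)))

# Sequentially present each input/target pair and apply a Hebbian-type correction that
# pulls the evoked activity toward the requested target while the input is on.
for epoch in range(30):
    for mu in range(P):
        x = 0.1 * rng.standard_normal(N)
        for _ in range(200):
            x = step(x, J, gamma * eta[mu])
            J += lr / N * np.outer(xi[mu] - x, x)       # Hebbian-type update toward the target

# After learning, check how strongly each input evokes its own target.
for mu in range(P):
    x = 0.1 * rng.standard_normal(N)
    for _ in range(500):
        x = step(x, J, gamma * eta[mu])
    print(f"input {mu}: overlap with its target = {x @ xi[mu] / N:+.2f}")
```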

    Temporal structure of spontaneous activity.

    No full text
A) Transition probability P_μν from the ν-th target to the μ-th target. The self-visiting probability P_μμ cannot be computed and is set to 0, because a continuous stay of the neural state around a target is not distinguished from leaving and re-entering the same target. B) Transition probability P_ab from category b to category a. C) Transition time T_μν from the ν-th to the μ-th target, averaged with weight P_μν. White tiles indicate that no transition occurred, so the transition time cannot be calculated. D) Transition time T_ab from category b to category a. All values are calculated from the spontaneous activity (see "Materials and Methods" for details).
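Panels A and C are built from a visitation sequence: each moment of spontaneous activity is assigned to the memorized target it overlaps most strongly (above some threshold), and transition probabilities and times are tallied between successive distinct visits, with self-transitions excluded. The sketch below reproduces that bookkeeping on synthetic data; the overlap criterion, threshold value, and the noisy itinerant trajectory are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, threshold = 200, 6, 0.5            # neurons, number of targets, overlap threshold (assumed)

targets = rng.choice([-1.0, 1.0], size=(M, N))

# Synthetic "spontaneous activity": dwell near randomly chosen targets with noise,
# a stand-in for the model's chaotic itinerancy over memorized patterns.
segments = [targets[rng.integers(M)] + 0.8 * rng.standard_normal((int(rng.integers(20, 100)), N))
            for _ in range(300)]
trajectory = np.vstack(segments)

# Assign each time step to the most-overlapping target, or to none (-1) below the threshold.
overlaps = trajectory @ targets.T / N
labels = np.where(overlaps.max(axis=1) > threshold, overlaps.argmax(axis=1), -1)

# Count transitions between successive *distinct* visited targets; a continuous stay near one
# target is not counted, so self-transition probabilities are effectively set to zero.
counts, times = np.zeros((M, M)), np.zeros((M, M))
prev, prev_t = -1, 0
for t, lab in enumerate(labels):
    if lab == -1:
        continue
    if prev != -1 and lab != prev:
        counts[lab, prev] += 1             # transitions from the nu-th to the mu-th target
        times[lab, prev] += t - prev_t     # accumulated transition time
    prev, prev_t = lab, t

P = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)                 # transition probabilities
T_mean = np.divide(times, counts, out=np.zeros_like(times), where=counts > 0) # mean transition times
print(np.round(P, 2))
```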