
    Flexible Memory Networks

    Networks of neurons in some brain areas are flexible enough to encode new memories quickly. Using a standard firing rate model of recurrent networks, we develop a theory of flexible memory networks. Our main results characterize networks having the maximal number of flexible memory patterns, given a constraint graph on the network's connectivity matrix. Modulo a mild topological condition, we find a close connection between maximally flexible networks and rank 1 matrices. The topological condition is H_1(X;Z)=0, where X is the clique complex associated to the network's constraint graph; this condition is generically satisfied for large random networks that are not overly sparse. In order to prove our main results, we develop some matrix-theoretic tools and present them in a self-contained section independent of the neuroscience context. Comment: Accepted to Bulletin of Mathematical Biology, 11 July 201
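The two objects the abstract pairs up, the clique complex of a constraint graph and a rank-1 matrix, can be illustrated in a short sketch. This is a hypothetical 4-neuron example of ours, not the paper's construction: `cliques` brute-forces the simplices of the clique complex, and `rank` checks an outer-product matrix.

```python
import itertools
import numpy as np

def cliques(adj):
    """Enumerate all cliques (the simplices of the clique complex X)
    of a graph given by a symmetric 0/1 adjacency matrix."""
    n = len(adj)
    out = []
    for r in range(1, n + 1):
        for s in itertools.combinations(range(n), r):
            if all(adj[i][j] for i, j in itertools.combinations(s, 2)):
                out.append(s)
    return out

# Hypothetical 4-neuron constraint graph: a 4-cycle (no triangles).
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
X = cliques(adj)  # 4 vertices + 4 edges, and no higher simplices

# A rank-1 matrix is an outer product u v^T; illustrative weights.
u, v = np.array([1.0, 2.0, 1.0, 3.0]), np.array([2.0, 1.0, 2.0, 1.0])
rank = np.linalg.matrix_rank(np.outer(u, v))
```

For large graphs a library such as networkx would enumerate cliques far more efficiently; the brute-force version above is only meant to make the definition concrete.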

    Finite strain viscoplasticity with nonlinear kinematic hardening: phenomenological modeling and time integration

    This article deals with a viscoplastic material model of overstress type. The model is based on a multiplicative decomposition of the deformation gradient into elastic and inelastic parts. An additional multiplicative decomposition of the inelastic part is used to describe a nonlinear kinematic hardening of Armstrong-Frederick type. Two implicit time-stepping methods are adopted for numerical integration of evolution equations, such that the plastic incompressibility constraint is exactly satisfied. The first method is based on the tensor exponential. The second method is a modified Euler-Backward method. Special numerical tests show that both approaches yield similar results even for finite inelastic increments. The basic features of the material response, predicted by the material model, are illustrated with a series of numerical simulations. Comment: 29 pages, 7 figures
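Why exactness of the incompressibility constraint motivates these two schemes can be seen in a deliberately simplified linear sketch, using a constant diagonal inelastic velocity gradient rather than the paper's constitutive model: the exponential map preserves det = 1 exactly because the exponent is traceless, while an unmodified Euler-backward step drifts for finite increments (which is why the paper modifies it).

```python
import numpy as np
from scipy.linalg import expm

# Toy inelastic velocity gradient: traceless (deviatoric), as required
# by plastic incompressibility det(F_i) = 1. Values are illustrative.
L_i = np.diag([1.0, 1.0, -2.0])  # tr(L_i) = 0
dt = 0.5                          # deliberately large time increment
F_old = np.eye(3)

# Exponential-map update: det(expm(dt*L_i)) = exp(dt * tr(L_i)) = 1 exactly.
F_exp = expm(dt * L_i) @ F_old
det_exp = np.linalg.det(F_exp)

# Unmodified Euler-backward step solves (I - dt*L_i) F_new = F_old;
# the determinant drifts away from 1 for finite dt.
F_eb = np.linalg.solve(np.eye(3) - dt * L_i, F_old)
det_eb = np.linalg.det(F_eb)
```

Here det_exp is 1 to round-off, whereas det_eb = 1/det(I - dt L_i) = 2 for this increment, so the plain backward-Euler update is far from incompressible.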

    Clique topology reveals intrinsic geometric structure in neural correlations

    Detecting meaningful structure in neural activity and connectivity data is challenging in the presence of hidden nonlinearities, where traditional eigenvalue-based methods may be misleading. We introduce a novel approach to matrix analysis, called clique topology, that extracts features of the data invariant under nonlinear monotone transformations. These features can be used to detect both random and geometric structure, and depend only on the relative ordering of matrix entries. We then analyzed the activity of pyramidal neurons in rat hippocampus, recorded while the animal was exploring a two-dimensional environment, and confirmed that our method is able to detect geometric organization using only the intrinsic pattern of neural correlations. Remarkably, we found similar results during non-spatial behaviors such as wheel running and REM sleep. This suggests that the geometric structure of correlations is shaped by the underlying hippocampal circuits, and is not merely a consequence of position coding. We propose that clique topology is a powerful new tool for matrix analysis in biological settings, where the relationship of observed quantities to more meaningful variables is often nonlinear and unknown. Comment: 29 pages, 4 figures, 13 supplementary figures (last two authors contributed equally)
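The key invariance is easy to demonstrate: any feature that depends only on the relative ordering of matrix entries is unchanged by a monotone transformation of those entries. A minimal sketch, where the matrix `C` and the nonlinearity `f` are our illustrative choices:

```python
import numpy as np

# A symmetric "correlation-like" matrix with hypothetical entries.
C = np.array([[1.0, 0.3, 0.8],
              [0.3, 1.0, 0.5],
              [0.8, 0.5, 1.0]])

def entry_order(M):
    """Rank the off-diagonal entries of a symmetric matrix; order-based
    features like clique topology depend on the data only through this."""
    iu = np.triu_indices_from(M, k=1)
    return np.argsort(M[iu])

# Any strictly increasing nonlinearity leaves the ordering unchanged,
# so the derived features are unchanged as well.
f = lambda x: np.tanh(3 * x) + x**3
same_order = np.array_equal(entry_order(C), entry_order(f(C)))
```

By contrast, the eigenvalues of `f(C)` generally differ from those of `C`, which is exactly why eigenvalue-based analyses can be misled by an unknown monotone nonlinearity.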

    Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report

    Threshold-linear networks consist of simple units interacting in the presence of a threshold nonlinearity. Competitive threshold-linear networks have long been known to exhibit multistability, where the activity of the network settles into one of potentially many steady states. In this work, we find conditions that guarantee the absence of steady states, while maintaining bounded activity. These conditions lead us to define a combinatorial family of competitive threshold-linear networks, parametrized by a simple directed graph. By exploring this family, we discover that threshold-linear networks are capable of displaying a surprisingly rich variety of nonlinear dynamics, including limit cycles, quasiperiodic attractors, and chaos. In particular, several types of nonlinear behaviors can co-exist in the same network. Our mathematical results also enable us to engineer networks with multiple dynamic patterns. Taken together, these theoretical and computational findings suggest that threshold-linear networks may be a valuable tool for understanding the relationship between network connectivity and emergent dynamics. Comment: 12 pages, 9 figures. Preliminary report
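A standard firing-rate form of such a network is dx/dt = -x + [Wx + b]_+, with [.]_+ the threshold nonlinearity. A minimal forward-Euler simulation of a competitive network built on a directed 3-cycle; the specific weight values below are an illustrative guess at the graph-parametrized family, not necessarily the paper's exact parametrization:

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, steps=5000):
    """Forward-Euler integration of the threshold-linear network
    dx/dt = -x + [W x + b]_+ , where [.]_+ = max(., 0)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical competitive 3-neuron network on a directed 3-cycle:
# edges of the graph get the weaker inhibition (-1 + eps), non-edges
# the stronger (-1 - delta). Illustrative parameter values.
eps, delta = 0.25, 0.5
W = np.array([[0.0,        -1 - delta, -1 + eps],
              [-1 + eps,    0.0,       -1 - delta],
              [-1 - delta, -1 + eps,    0.0]])
b = np.ones(3)

traj = simulate_tln(W, b, x0=[0.2, 0.1, 0.05])
```

Because all interactions are inhibitory and the external drive is bounded, the activity stays nonnegative and bounded, yet for these cyclic weights the trajectory is expected to approach a limit cycle rather than a steady state, illustrating the "bounded activity without stable fixed points" regime the abstract describes.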

    Hippocampal representation of touch and sound guided behavior

    Understanding the mechanisms by which sensory experiences are stored is a longstanding challenge for neuroscience. Previous work has described how the activity of neurons in the sensory cortex allows rats to discriminate the physical features of an object contacted with their whiskers. But to date there is no evidence about how neurons represent the behavioral significance of tactile stimuli, or how tactile events are encoded in memory. To investigate these issues, we recorded single-unit firing and local field potentials from the CA1 region of hippocampus while rats performed a tactile task. On each trial, the rat touched a plate with its whiskers and, after identifying the texture of the plate, turned to the left or right to obtain its reward. Two textures were associated with each reward location. Over one-third of the sampled neurons encoded the identity of the texture: their firing differed for the two stimuli associated with the same reward location. Over 80 percent of the sampled neurons encoded the behavioral significance of the contacted texture: their firing differed according to the reward location with which the texture was associated. Texture and reward location signals were present continuously, from the moment of stimulus contact through the entire period of reward collection. The local field potential power spectrum varied across the different phases of behavior, showing that single-unit signals were present within a sequence of distinct hippocampal states. The influence of location was examined by training rats to perform the same task in different positions within the room. The responses of neurons to a given stimulus in different locations were independent. This was not the case for reward location signals: neurons that carried a signal in one location were more likely to carry a signal in the other location.
In summary, during a touch-guided behavior, neurons of the CA1 region represent both non-spatial (texture identity) and spatial (reward location) events. Additional experiments were carried out on another set of rats to generalize some of the above findings from the tactile to the auditory modality. On each trial, the rat leaned into the gap and heard one of four sounds drawn from a vowel continuum from "A" to "I". After identifying the sound, the rat turned to the left or right to obtain its reward. Two sounds were associated with each reward location, and the experiment was repeated on two platforms. As in the tactile task, more than 80 percent of neurons represented reward location and more than 25 percent of neurons represented the identity of the sound (the vowel). The role of context on the stimulus and reward location signals was the same as in the tactile experiments. Representations of sounds were independent across the two platforms, but representations of reward location were not: neurons that carried a signal on one platform were more likely to carry a signal on the other. These responses were absent during passive listening to the sounds.

    The combinatorial code and the graph rules of Dale networks

    We describe the combinatorics of equilibria and steady states of neurons in threshold-linear networks that satisfy Dale's law. The combinatorial code of a Dale network is characterized in terms of two conditions: (i) a condition on the network connectivity graph, and (ii) a spectral condition on the synaptic matrix. We find that in the weak coupling regime the combinatorial code depends only on the connectivity graph, and not on the particulars of the synaptic strengths. Moreover, we prove that the combinatorial code of a weakly coupled network is a sublattice, and we provide a learning rule for encoding a sublattice in a weakly coupled excitatory network. In the strong coupling regime we prove that the combinatorial code of a generic Dale network is intersection-complete and is therefore a convex code, as is common in some sensory systems in the brain. Comment: 22 pages, 4 figures, added discussion section, corrected typos, expanded the background on convex codes
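Dale's law requires each neuron to be purely excitatory or purely inhibitory, i.e. all of a neuron's outgoing synaptic weights share one sign. A minimal sketch of that check (the matrices are illustrative, and the convention that columns of W hold outgoing weights is an assumption):

```python
import numpy as np

def satisfies_dale(W):
    """Check Dale's law on a synaptic matrix: each neuron's outgoing
    weights (here, the columns of W) are all >= 0 or all <= 0."""
    return all(bool(np.all(col >= 0) or np.all(col <= 0)) for col in W.T)

# Two excitatory neurons and one inhibitory neuron: satisfies Dale's law.
W_dale = np.array([[0.0, 0.5, -0.3],
                   [0.7, 0.0, -0.8],
                   [0.2, 0.4,  0.0]])

# Second column mixes signs: this neuron would be both E and I.
W_mixed = np.array([[0.1, -0.5],
                    [0.3,  0.2]])
```

Here `satisfies_dale(W_dale)` is true and `satisfies_dale(W_mixed)` is false; the abstract's graph and spectral conditions are stated for matrices of the first kind.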