10 research outputs found

    A Step Towards Uncovering The Structure of Multistable Neural Networks

    Full text link
    We study the structure of multistable recurrent neural networks. The activation function is taken to be a nonsmooth Heaviside step function. This nonlinearity partitions the phase space into regions with different, yet linear, dynamics. We derive how multistability is encoded within the network architecture: stable states are identified by semipositivity constraints on the synaptic weight matrix, and these restrictions can be separated by their effects on the signs or the strengths of the connections. Exact results on network topology, sign stability, weight-matrix factorization, pattern completion, and pattern coupling are derived and proven. These results may lay the foundation for more complex recurrent neural networks and for neurocomputing. Comment: 33 pages, 9 figures
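    As a toy illustration of the setting described above (not the paper's construction), one can iterate a Heaviside-step network x ← H(Wx + b) and check which binary patterns map to themselves; the two-unit weight matrix below, with self-excitation and mutual inhibition, is a hypothetical example:

```python
import numpy as np

def step_rnn(W, b, x0, n_iter=50):
    """Iterate x <- H(W x + b), with H the Heaviside step applied elementwise."""
    x = x0.astype(float)
    for _ in range(n_iter):
        x = (W @ x + b > 0).astype(float)
    return x

# Hypothetical two-unit network: self-excitation 2.0, mutual inhibition -1.5.
W = np.array([[ 2.0, -1.5],
              [-1.5,  2.0]])
b = np.array([-1.0, -1.0])

# A binary pattern is a stable state iff the update maps it to itself;
# here both one-hot patterns are stable, so the network is multistable.
for x0 in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    print(x0, "->", step_rnn(W, b, x0))
```

In this toy case, multistability depends on the signs and strengths of W exactly in the spirit of the constraints the abstract describes.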

    A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements

    Full text link
    The computational properties of neural systems are often thought to be implemented in terms of their network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit (MSU) recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a state space representation of the dynamics, but would also wish to have access to its governing equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework that has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, the approach is applied to MSU recordings from the rodent anterior cingulate cortex obtained during performance of a classical working memory task, delayed alternation. A model with 5 states turned out to be sufficient to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multistable, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of the relevant dynamics underlying observed neuronal time series, and directly link them to computational properties.
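    The generative side of such a state space model can be sketched as below, assuming the common PLRNN form z_t = A z_{t-1} + W max(0, z_{t-1}) + h with Gaussian process and observation noise; all parameter values here are illustrative, and the paper's inference machinery (EM with a global Laplace approximation) is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_plrnn(A, W, h, B, T, z0, s_proc=0.01, s_obs=0.05):
    """Simulate a PLRNN state space model:
    latent:      z_t = A z_{t-1} + W max(0, z_{t-1}) + h + process noise
    observation: x_t = B max(0, z_t) + observation noise
    Returns latent trajectory Z (T x M) and observations X (T x N)."""
    M, N = len(z0), B.shape[0]
    Z, X = np.zeros((T, M)), np.zeros((T, N))
    z = z0.copy()
    for t in range(T):
        z = A @ z + W @ np.maximum(0.0, z) + h + s_proc * rng.standard_normal(M)
        Z[t] = z
        X[t] = B @ np.maximum(0.0, z) + s_obs * rng.standard_normal(N)
    return Z, X

# Hypothetical 2-D latent dynamics observed through a random 3-D readout.
A = np.diag([0.9, 0.8])
W = np.array([[0.0, -0.5], [0.5, 0.0]])
h = np.array([0.2, -0.1])
B = rng.standard_normal((3, 2))
Z, X = simulate_plrnn(A, W, h, B, T=200, z0=np.zeros(2))
print(Z.shape, X.shape)  # (200, 2) (200, 3)
```

Fitting would invert this generative process: given only X, estimate A, W, h, B and the latent states.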

    State-Dependent Computation Using Coupled Recurrent Networks

    Get PDF
    Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
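    A drastically simplified, single-map sketch of the persistence mechanism (the two coupled sWTAs and the transition neurons are not modeled here): a rate network with self-excitation, shared inhibition, and saturating rates can keep expressing a winner after its input is withdrawn. All parameter values are illustrative assumptions:

```python
import numpy as np

def swta_step(r, I, w_exc=2.2, w_inh=1.0, dt=0.1):
    """One Euler step of a rate-based soft winner-take-all: each unit has
    self-excitation w_exc, receives shared inhibition w_inh * sum(r), and
    its drive is clipped to [0, 1] (saturating transfer function)."""
    drive = w_exc * r - w_inh * r.sum() + I
    return r + dt * (-r + np.clip(drive, 0.0, 1.0))

r = np.zeros(3)
I = np.array([1.0, 0.6, 0.4])   # transient input favoring unit 0
for _ in range(200):            # input on: competition selects unit 0
    r = swta_step(r, I)
for _ in range(400):            # input withdrawn: the winner persists
    r = swta_step(r, np.zeros(3))
print(np.argmax(r), r.round(2))
```

Because the winner's recurrent self-excitation exceeds the inhibition it receives, its saturated activity is a stable state even with zero input, which is the state-holding property the abstract builds on.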

    Persistence and storage of activity patterns in spiking recurrent cortical networks: modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine

    Get PDF
    Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems from the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 ms or longer after stimulus offset, then resolve to no stored pattern, or to winner-take-all (WTA) stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners when the network stabilizes.
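    A caricature of the quenching-threshold effect, under the assumption that a shunting on-center off-surround network approximately normalizes its total activity: iterating the sigmoid signal function followed by renormalization suppresses subthreshold activities toward a common baseline while the suprathreshold part of the pattern is contrast-enhanced. The threshold and slope values are illustrative stand-ins for the AHP-controlled quantities discussed above:

```python
import numpy as np

def sigmoid(x, thresh=0.3, slope=10.0):
    """Sigmoid signal function; threshold and slope are the parameters the
    text says are set by AHP currents (values here are illustrative)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - thresh)))

# Normalize the input pattern, then repeatedly apply the signal function
# and renormalize (a discrete caricature of shunting-network dynamics).
x = np.array([0.5, 0.45, 0.15, 0.10])
x = x / x.sum()
ratio_before = x[0] / x[3]
for _ in range(50):
    f = sigmoid(x)
    x = f / f.sum()
print(x.round(3), "contrast:", round(x[0] / x[3], 1), "vs", round(ratio_before, 1))
```

The two subthreshold activities collapse toward the same small baseline (they are "suppressed as noise"), while both suprathreshold units survive, matching the abstract's stored patterns with multiple winners.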

    Dynamics analysis and applications of neural networks

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Training issues and learning algorithms for feedforward and recurrent neural networks

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    Get PDF
    Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which memory is stored in attractive fixed points at discrete locations in state space. A nonlinear line of attraction is the encapsulation of attractive fixed points scattered in state space as an attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. It is usually imperative to guarantee the convergence of the dynamics of the recurrent network for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to states representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The conception of the dynamics of the nonlinear line attractor network to operate between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior. The self-organizing behavior of the nonlinear line attractor model can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique by embedding a set of related observations into a low-dimensional space utilizing the results attained by the learned memory matrices of the nonlinear line attractor network. Development of a system for affective states computation is also presented in this dissertation. This system is capable of extracting the user's mental state in real time using a low-cost computer. It is successfully interfaced with an advanced learning environment for human-computer interaction.
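    A linear caricature of the line-of-attraction idea (the dissertation's line is nonlinear and learned, which is not reproduced here): dynamics that contract the component of the state orthogonal to a direction v leave every point on the line span{v} fixed, and the residual off-line distance can serve as the kind of familiarity/novelty indicator described above:

```python
import numpy as np

# Line attractor along the (hypothetical) direction v: every point on
# span{v} is a fixed point; off-line components decay geometrically.
v = np.array([1.0, 2.0])
v = v / np.linalg.norm(v)
P = np.outer(v, v)                  # orthogonal projector onto the line

def settle(x, n_iter=100, rate=0.2):
    """Relax x toward the line: the off-line component shrinks by
    (1 - rate) per step; the on-line component is untouched."""
    for _ in range(n_iter):
        x = x + rate * (P @ x - x)
    return x

x = settle(np.array([3.0, 1.0]))
dist = np.linalg.norm(x - P @ x)    # ~0 once the state has reached the line
print(x.round(3), dist)
```

In the dissertation's scheme, a large residual distance after settling would flag an unfamiliar pattern and trigger creation of a new line, which is the divergence-as-novelty idea above.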

    Multistability analysis for recurrent neural networks with unsaturating piecewise linear transfer functions

    No full text
    DOI: 10.1162/089976603321192112. Neural Computation, 15(3), 639-66.