7,881 research outputs found

    Self-clustering recurrent networks

    Recurrent neural networks have recently been shown to be able to learn finite state automata (FSAs) from examples. In this paper it is shown, based on empirical analysis, that second-order networks trained to learn FSAs tend to form discrete clusters as the state representation in the hidden-unit activation space. This observation is used to define 'self-clustering' networks, which automatically extract discrete state machines from the learned network. However, instability on long test strings limits the generalization performance of recurrent networks: because of the analog nature of the state representation, the network gradually "forgets" where the individual state regions are. To address this problem, a new network structure is introduced in which quantization in the feedback path forces the learning of discrete states. Experimental results show that the new method learns FSAs as well as existing methods in the literature, with the significant advantage of being stable on test strings of arbitrary length.
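    To make the quantized-feedback idea concrete, here is a minimal sketch of a second-order recurrent network that snaps its hidden state to discrete levels before feeding it back on the next step. Everything here (network sizes, the `quantize` helper, the assumed start state) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

# Hypothetical second-order recurrent network with a quantized feedback path,
# in the spirit of the abstract; all names and sizes are assumptions.
rng = np.random.default_rng(0)

n_states, n_symbols = 4, 2   # hidden units and input alphabet size
W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))  # second-order weights
b = np.zeros(n_states)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quantize(s, levels=2):
    # Snap each analog activation to the nearest of `levels` discrete values
    # in [0, 1]; this is the quantization-in-the-feedback-path idea.
    return np.round(s * (levels - 1)) / (levels - 1)

def run(string):
    s = np.zeros(n_states); s[0] = 1.0      # assumed start state
    for symbol in string:                   # string: sequence of ints in [0, n_symbols)
        x = np.zeros(n_symbols); x[symbol] = 1.0
        # Second-order update: each weight couples one state unit to one input symbol.
        s = sigmoid(np.einsum('ijk,j,k->i', W, s, x) + b)
        s = quantize(s)                     # discrete state fed back on the next step
    return s

print(run([0, 1, 1, 0]))
```

    Because the fed-back state is always one of finitely many discrete vectors, the network cannot drift out of a state region on long strings, which is the stability argument the abstract makes.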

    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Medical applications have usually used Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, along with a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
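    As a concrete reminder of what a single RBFN is before layering interpretations onto it, here is a minimal Gaussian RBFN fit by linear least squares; the centers, shared width, and toy data below are assumptions for illustration, not the survey's own setup.

```python
import numpy as np

# Minimal Gaussian RBF network: fixed centers and width, linear output weights.
rng = np.random.default_rng(1)

X = rng.uniform(-3, 3, size=(200, 1))             # toy inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # toy targets

centers = np.linspace(-3, 3, 10).reshape(-1, 1)   # assumed basis-function centers
width = 0.8                                       # assumed shared Gaussian width

def design(X):
    # Phi[n, j] = exp(-||x_n - c_j||^2 / (2 * width^2)): one Gaussian unit per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# With centers and widths fixed, the output weights solve a linear problem --
# this linearity is what makes RBFNs readable as regularization networks,
# kernel estimators, or instance-based learners, depending on the viewpoint.
w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
print(design(np.array([[0.5]])) @ w)              # prediction at x = 0.5
```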

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contribute in a fundamental manner to brain function. Progress in understanding and exploiting the computational power of recurrent dynamics has been slow for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable patterns of neural activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
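    A rough sketch of the setup the abstract describes: a rate network in the chaotic regime (gain g > 1) whose recurrent weights are nudged toward reproducing a recorded "innate" trajectory from a perturbed start. The simple delta rule and all parameters below are didactic stand-ins, not the paper's actual learning rule.

```python
import numpy as np

# Schematic chaotic rate network plus a crude supervised rule that pulls the
# recurrent dynamics toward a recorded target trajectory. Illustrative only.
rng = np.random.default_rng(2)

N, g, dt, tau = 200, 1.5, 0.1, 1.0
W = g * rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # random recurrent weights

def simulate(W, x0, steps):
    x, rs = x0.copy(), []
    for _ in range(steps):
        r = np.tanh(x)
        x += dt / tau * (-x + W @ r)
        rs.append(r.copy())
    return np.array(rs)

x0 = rng.normal(size=N)
target = simulate(W, x0, 300)   # record the 'innate' (initially chaotic) trajectory

# Supervised pass: replay from a slightly perturbed start and nudge the
# recurrent weights to cancel the deviation from the recorded rates.
eta = 0.02
for _ in range(30):
    x = x0 + 0.01 * rng.normal(size=N)   # perturbed start, as in robustness tests
    for t in range(300):
        r = np.tanh(x)
        err = r - target[t]
        W -= eta * np.outer(err, target[t]) / N   # crude delta rule on W
        x += dt / tau * (-x + W @ r)

drift = np.linalg.norm(simulate(W, x0 + 0.01 * rng.normal(size=N), 300) - target)
print(f"post-training drift from innate trajectory: {drift:.3f}")
```

    The design point is that learning acts on the recurrent weights themselves, so the chaotic trajectory becomes locally stable around the trained path while dynamics elsewhere can remain chaotic.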