
    Continuous Attractors in Recurrent Neural Networks and Phase Space Learning

    Recurrent networks can be used as associative memories in which the stored memories are fixed points to which the dynamics of the network converges. These networks, however, can also present continuous attractors, such as limit cycles and chaotic attractors. We argue for the use of these attractors in recurrent networks for the construction of associative memories. Here, we provide a training algorithm for continuous attractors and present some numerical results of the learning method, which involves genetic algorithms.

    Continuous attractors. A simple recurrent neural network can exhibit a diversity of dynamic behaviors. This diversity, which includes unstable states and continuous attractors, is particularly undesirable when associative memories are built from fixed-point attractors. On the other hand, continuous attractors may be convenient for constructing memories associated with patterns that exhibit continuous variability [1]. However, convergence of learning is in general not guaranteed for recurrent neural networks, and in particular not for the learning of continuous attractors.

    Phase space learning. Learning based on adjusting the position of attractors in phase space eliminates the time dimension from the training problem. This makes training easier: we only need to adjust the network parameter values so as to reproduce the phase portrait of a specific dynamical system. Any algorithm can be used to adjust the network parameters; here we choose a genetic algorithm.

    Genetic algorithm. The use of genetic algorithms for training was motivated by practical reasons. The distance between the points of the desired orbit and the network output supplies the fitness function to be minimized. Variable mutation rates and elitism improved the efficiency of our approximations.

    Conclusion. Our method can approximate closed orbits in R^2 using a one-hidden-layer net. The viabilit
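
    The abstract describes the method only at a high level. Below is a minimal sketch, in Python/NumPy, of how phase-space learning of a closed orbit in R^2 with a one-hidden-layer net and a genetic algorithm (elitism, variable mutation rate) might be set up. The circular target orbit, the network size, and all GA hyper-parameters are illustrative assumptions, not values taken from the paper.

    # Minimal sketch of phase-space learning of a closed orbit with a
    # one-hidden-layer network trained by a genetic algorithm.  All names,
    # sizes and hyper-parameters (N_POINTS, HIDDEN, POP, ...) are
    # illustrative assumptions, not the authors' reported setup.
    import numpy as np

    rng = np.random.default_rng(0)

    # Desired continuous attractor: a closed orbit in R^2 (here, the unit
    # circle sampled at N_POINTS).  Phase-space learning removes time from
    # the problem: we only ask the map x_k -> x_{k+1} to be reproduced.
    N_POINTS = 64
    theta = np.linspace(0.0, 2.0 * np.pi, N_POINTS, endpoint=False)
    orbit = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # (N, 2)
    targets = np.roll(orbit, -1, axis=0)                       # next point on the orbit

    # One-hidden-layer network R^2 -> R^2 with tanh hidden units.
    HIDDEN = 12
    N_PARAMS = 2 * HIDDEN + HIDDEN + HIDDEN * 2 + 2            # W1, b1, W2, b2

    def unpack(genome):
        i = 0
        W1 = genome[i:i + 2 * HIDDEN].reshape(2, HIDDEN); i += 2 * HIDDEN
        b1 = genome[i:i + HIDDEN];                        i += HIDDEN
        W2 = genome[i:i + HIDDEN * 2].reshape(HIDDEN, 2); i += HIDDEN * 2
        b2 = genome[i:i + 2]
        return W1, b1, W2, b2

    def forward(genome, x):
        W1, b1, W2, b2 = unpack(genome)
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def fitness(genome):
        # Fitness to minimise: distance between the points of the desired
        # orbit and the network output, as described in the abstract.
        pred = forward(genome, orbit)
        return np.mean(np.sum((pred - targets) ** 2, axis=1))

    # Genetic algorithm with elitism and a mutation scale that shrinks over
    # generations (a simple form of variable mutation rate).
    POP, GENERATIONS, ELITE = 80, 400, 4
    population = rng.normal(0.0, 0.5, size=(POP, N_PARAMS))

    for gen in range(GENERATIONS):
        scores = np.array([fitness(g) for g in population])
        population = population[np.argsort(scores)]            # best (lowest error) first
        sigma = 0.3 * (1.0 - gen / GENERATIONS) + 0.01          # shrinking mutation scale

        children = [population[:ELITE].copy()]                 # elitism: keep the best unchanged
        while sum(len(c) for c in children) < POP:
            # Parents drawn from the better half (truncation selection),
            # uniform crossover, then Gaussian mutation.
            a, b = rng.integers(0, POP // 2, size=2)
            mask = rng.random(N_PARAMS) < 0.5
            child = np.where(mask, population[a], population[b])
            child = child + rng.normal(0.0, sigma, N_PARAMS)
            children.append(child[None, :])
        population = np.vstack(children)[:POP]

    best = population[0]
    print("final orbit error:", fitness(best))

    # Iterating the trained map from a point near the orbit should trace
    # out an approximation of the closed orbit.
    x = np.array([1.1, 0.0])
    for _ in range(5):
        x = forward(best, x)
        print(x)

    In this sketch the orbit-point error plays the role of the fitness function mentioned in the abstract; to make the learned orbit genuinely attracting one would typically also include training points in a neighborhood of the orbit that map toward it, which is omitted here for brevity.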
