28,813 research outputs found

    Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks

    Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown distinct advantages, e.g., solving memory-dependent tasks and meta-learning. However, little effort has been spent on improving RNN architectures or on understanding the underlying neural mechanisms behind the performance gains. In this paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical results show that the network can autonomously learn to abstract sub-goals and can self-develop an action hierarchy through its internal dynamics in a challenging continuous control task. Furthermore, we show that the self-developed compositionality of the network enables faster re-learning when adapting to a new task that is a re-composition of previously learned sub-goals than learning from scratch. We also found that improved performance is achieved when neural activities are subject to stochastic rather than deterministic dynamics.
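    A minimal sketch of what such a multiple-timescale, stochastic recurrent cell could look like (the two-layer split, layer sizes, time constants, and the Gaussian reparameterization of the hidden states are illustrative assumptions, not details taken from the paper):

    import torch
    import torch.nn as nn

    class MultiTimescaleStochasticRNN(nn.Module):
        """Two leaky-integrator recurrent layers with different time constants.
        Each layer's pre-activation is sampled from a learned Gaussian, so the
        internal dynamics are stochastic rather than deterministic."""

        def __init__(self, obs_dim, act_dim, fast_dim=64, slow_dim=32,
                     tau_fast=2.0, tau_slow=20.0):
            super().__init__()
            self.tau_fast, self.tau_slow = tau_fast, tau_slow
            self.fast_in = nn.Linear(obs_dim + fast_dim + slow_dim, 2 * fast_dim)
            self.slow_in = nn.Linear(slow_dim + fast_dim, 2 * slow_dim)
            self.policy = nn.Linear(fast_dim, act_dim)

        def step(self, obs, h_fast, h_slow):
            # Fast layer: small time constant -> quick, low-level dynamics.
            mu_f, logvar_f = self.fast_in(
                torch.cat([obs, h_fast, h_slow], dim=-1)).chunk(2, dim=-1)
            z_f = mu_f + torch.exp(0.5 * logvar_f) * torch.randn_like(mu_f)
            h_fast = (1 - 1 / self.tau_fast) * h_fast \
                     + (1 / self.tau_fast) * torch.tanh(z_f)

            # Slow layer: large time constant -> abstract, sub-goal-like dynamics.
            mu_s, logvar_s = self.slow_in(
                torch.cat([h_slow, h_fast], dim=-1)).chunk(2, dim=-1)
            z_s = mu_s + torch.exp(0.5 * logvar_s) * torch.randn_like(mu_s)
            h_slow = (1 - 1 / self.tau_slow) * h_slow \
                     + (1 / self.tau_slow) * torch.tanh(z_s)

            action = torch.tanh(self.policy(h_fast))
            return action, h_fast, h_slow

    At each control step the fast layer can carry low-level motor dynamics while the slowly updated layer can settle into sub-goal-like states; sampling the hidden states rather than using their means is the stochastic dynamics the abstract refers to.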

    Learning recurrent representations for hierarchical behavior modeling

    We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high-level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high-level phenomena such as writer identity and fly gender without supervision, and 3) simulated motion trajectories, generated by feeding the network's motion predictions back as input, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
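    A rough sketch of this kind of joint discriminative/generative recurrent model, simplified here to two stacked GRU cells feeding a classification head and a motion-prediction head (the cell type, sizes, and exact wiring of the lateral connections are assumptions, not the paper's architecture):

    import torch
    import torch.nn as nn

    class BehaviorRNN(nn.Module):
        """Recurrent model with a discriminative head (frame-wise action labels)
        and a generative head (next-motion prediction). Two stacked recurrent
        cells let the upper level capture slower, higher-level structure."""

        def __init__(self, motion_dim, n_actions, hidden=128):
            super().__init__()
            self.low = nn.GRUCell(motion_dim, hidden)        # low-level dynamics
            self.high = nn.GRUCell(hidden, hidden)           # high-level dynamics
            self.classify = nn.Linear(2 * hidden, n_actions) # discriminative part
            self.predict = nn.Linear(2 * hidden, motion_dim) # generative part

        def forward(self, motion_seq, h_low=None, h_high=None):
            B, T, _ = motion_seq.shape
            h_low = torch.zeros(B, self.low.hidden_size) if h_low is None else h_low
            h_high = torch.zeros(B, self.high.hidden_size) if h_high is None else h_high
            logits, next_motion = [], []
            for t in range(T):
                h_low = self.low(motion_seq[:, t], h_low)
                h_high = self.high(h_low, h_high)            # upward/lateral link
                joint = torch.cat([h_low, h_high], dim=-1)
                logits.append(self.classify(joint))          # action labels at t
                next_motion.append(self.predict(joint))      # motion at t+1
            return torch.stack(logits, 1), torch.stack(next_motion, 1)

    Feeding the predicted motion back in as the next input turns the same model into a trajectory generator, and the prediction loss provides a training signal on unlabeled sequences, matching the semi-supervised use described above.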

    Neural-Network Vector Controller for Permanent-Magnet Synchronous Motor Drives: Simulated and Hardware-Validated Results

    This paper focuses on current control in a permanent-magnet synchronous motor (PMSM). The paper has two main objectives. The first is to develop a neural-network (NN) vector controller that overcomes the decoupling inaccuracy of conventional PI-based vector-control methods; the NN is developed from the full dynamic equations of a PMSM and trained to implement optimal control based on approximate dynamic programming. The second is to evaluate the robustness and adaptiveness of the NN controller against a conventional vector controller under motor parameter variation and dynamic control conditions by (a) simulating the behavior of a PMSM typical of realistic electric vehicle applications and (b) building an experimental system for hardware validation as well as combined hardware and simulation evaluation. The results demonstrate that the NN controller outperforms conventional vector controllers in both simulation and hardware implementation.
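    For orientation, a toy sketch of the dq-frame current dynamics of a PMSM together with a small neural controller trained by minimizing an accumulated tracking cost over a rollout; the motor parameters, network shape, and the plain gradient-descent objective are illustrative stand-ins for the approximate-dynamic-programming training described in the abstract:

    import torch
    import torch.nn as nn

    # Illustrative PMSM parameters (not from the paper): stator resistance,
    # d/q-axis inductances, magnet flux linkage, electrical speed, time step.
    Rs, Ld, Lq, psi_f, w_e, dt = 0.1, 1e-3, 1.2e-3, 0.05, 300.0, 1e-4

    def pmsm_step(i_d, i_q, v_d, v_q):
        """One Euler step of the dq-frame current dynamics:
           di_d/dt = (v_d - Rs*i_d + w_e*Lq*i_q) / Ld
           di_q/dt = (v_q - Rs*i_q - w_e*(Ld*i_d + psi_f)) / Lq"""
        did = (v_d - Rs * i_d + w_e * Lq * i_q) / Ld
        diq = (v_q - Rs * i_q - w_e * (Ld * i_d + psi_f)) / Lq
        return i_d + dt * did, i_q + dt * diq

    # NN vector controller: maps tracking errors and currents to dq voltages.
    controller = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))

    def rollout_cost(i_ref, horizon=50):
        """Accumulated quadratic tracking cost over a simulated rollout;
        minimizing it by gradient descent stands in for the ADP training."""
        i_d, i_q, cost = torch.zeros(1), torch.zeros(1), torch.zeros(1)
        for _ in range(horizon):
            err = torch.cat([i_ref[0:1] - i_d, i_ref[1:2] - i_q])
            v = controller(torch.cat([err, i_d, i_q]))
            i_d, i_q = pmsm_step(i_d, i_q, v[0:1], v[1:2])
            cost = cost + (err ** 2).sum()
        return cost

    Minimizing rollout_cost with respect to the controller parameters (e.g. with torch.optim.Adam) drives i_d and i_q toward their references through the coupled cross terms that a fixed-gain PI decoupling scheme only approximates.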