Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems
This paper is motivated by the problem of how robots can fuse and
transfer their experience so that they can effectively use prior knowledge
and quickly adapt to new environments. To address this problem, we present
a learning architecture for navigation in cloud robotic systems: Lifelong
Federated Reinforcement Learning (LFRL). In this work, we propose a
knowledge fusion algorithm for upgrading a shared model deployed on the
cloud, and then introduce effective transfer learning methods in LFRL.
LFRL is consistent with human cognitive science and fits well in cloud
robotic systems. Experiments show that LFRL greatly improves the
efficiency of reinforcement learning for robot navigation. The cloud
robotic system deployment also shows that LFRL is capable of fusing prior
knowledge. In addition, we release a cloud robotic navigation-learning
website based on LFRL.
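The abstract does not spell out the knowledge fusion algorithm, but the core idea of upgrading a shared cloud model from per-robot experience can be sketched as federated, sample-weighted model averaging. The function name and the weighting scheme below are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fuse_models(local_weights, sample_counts):
    """Fuse per-robot policy weights into one shared cloud model by
    sample-weighted averaging (a FedAvg-style illustration; LFRL's
    actual knowledge-fusion algorithm may differ).

    local_weights: list of models, each a list of layer arrays
    sample_counts: experience count contributed by each robot
    """
    total = sum(sample_counts)
    fused = [np.zeros_like(w) for w in local_weights[0]]
    for weights, n in zip(local_weights, sample_counts):
        for layer, w in enumerate(weights):
            # Robots with more experience pull the shared model harder.
            fused[layer] += (n / total) * w
    return fused
```

A robot would then download the fused model as its starting point in a new environment, rather than learning from scratch.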
A Cognitive Architecture Based on a Learning Classifier System with Spiking Classifiers
© 2015, Springer Science+Business Media New York. Learning classifier systems (LCS) are population-based reinforcement learners that were originally designed to model various cognitive phenomena. This paper presents an explicitly cognitive LCS by using spiking neural networks as classifiers, providing each classifier with a measure of temporal dynamism. We employ a constructivist model of growth of both neurons and synaptic connections, which permits a genetic algorithm to automatically evolve sufficiently complex neural structures. The spiking classifiers are coupled with a temporally sensitive reinforcement learning algorithm, which allows the system to perform temporal state decomposition by appropriately rewarding "macro-actions" created by chaining together multiple atomic actions. The combination of temporal reinforcement learning and neural information processing is shown to outperform benchmark neural classifier systems and to successfully solve a robotic navigation task.
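The notion of a macro-action rewarded as a unit can be made concrete with a small sketch: enumerate chains of atomic actions and credit a whole chain with its discounted return. This is illustrative only; the paper evolves its structures with a genetic algorithm and spiking classifiers rather than enumerating chains.

```python
from itertools import product

def macro_actions(atomic, max_len):
    """Enumerate candidate macro-actions as chains of up to max_len
    atomic actions (hypothetical enumeration for illustration)."""
    chains = []
    for length in range(1, max_len + 1):
        chains.extend(product(atomic, repeat=length))
    return chains

def macro_return(rewards, gamma=0.9):
    """Discounted return credited to an entire macro-action, so the
    learner can compare chained behaviours against single steps."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))
```

Rewarding the chain as one unit is what lets the learner decompose a task temporally: a useful multi-step behaviour accumulates credit even when its individual atomic steps earn little on their own.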
Covert Perceptual Capability Development
In this paper, we propose a model for developing robots' covert
perceptual capability using reinforcement learning. Covert perceptual
behavior is treated as an action selected by a motivational system. We
apply this model to vision-based navigation, with the goal of enabling a
robot to learn the road boundary type. Instead of dealing with problems
in controlled environments with a low-dimensional state space, we test
the model on images captured in non-stationary environments. Incremental
Hierarchical Discriminant Regression (IHDR) is used to generate states on
the fly; its coarse-to-fine tree structure guarantees real-time retrieval
in a high-dimensional state space. A k-nearest-neighbor strategy is
adopted to further reduce the training time complexity.
- …