
Automatic Development from Pixel-level Representation to Action-level Representation in Robot Navigation

Presented at the AAAI Fall Symposium on Computational Approaches to Representation Change in Learning and Development, 2007.



Many important real-world robotic tasks have high diameter; that is, their solution requires a large number of primitive actions by the robot. For example, they may require navigating to distant locations using primitive motor control commands. In addition, modern robots are endowed with rich, high-dimensional sensory systems, providing measurements of a continuous environment. Reinforcement learning (RL) has shown promise as a method for automatic learning of robot behavior, but current methods work best on low-diameter, low-dimensional tasks. Because of this problem, the success of RL on real-world tasks still depends on human analysis of the robot, environment, and task to provide a useful sensorimotor representation to the learning agent. A new method, Self-Organizing Distinctive-state Abstraction (SODA), addresses this problem.
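The notion of task diameter in the abstract can be illustrated with a minimal, hypothetical example (not part of the paper): tabular Q-learning on a one-dimensional corridor, where the diameter equals the number of primitive steps needed to reach the goal. The function name, corridor environment, and all hyperparameters below are illustrative assumptions, not the authors' setup.

```python
import random

def q_learning_corridor(length, episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor of `length` cells.

    The agent starts at cell 0; only reaching the last cell yields reward 1.
    The task's diameter is `length - 1`: that many primitive actions are
    required even under the optimal policy, and longer corridors make the
    delayed reward harder to propagate back to the start state.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(length)]  # actions: 0 = left, 1 = right

    def greedy(s):
        # Break ties randomly so the initial all-zero table still explores.
        best = max(q[s])
        return rng.choice([a for a in (0, 1) if q[s][a] == best])

    for _ in range(episodes):
        s = 0
        while s != length - 1:
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2 = max(0, min(length - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == length - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy points right in every non-goal cell; growing `length` (the diameter) stretches the reward horizon and is a simple way to see why high-diameter tasks strain flat RL at the primitive-action level.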

Year: 2009
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
