
Novelty and Reinforcement Learning in the Value System of Developmental Robots

By Xiao Huang and John Weng

Abstract

The value system of a developmental robot signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions. In the work reported here, a low-level value system is modeled and implemented. It simulates the non-associative animal learning mechanism known as the habituation effect, and reinforcement learning is integrated with novelty. Experimental results show that the proposed value system works as designed in a study of robot viewing-angle selection.
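The two mechanisms named in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the class, parameter names, and the exponential habituation rule are illustrative assumptions): novelty decays with repeated exposure to a stimulus, while a reward-driven value estimate is learned alongside it, and the two are combined into a single salience signal.

```python
# Illustrative sketch, NOT the paper's implementation: a novelty signal that
# habituates with repeated exposure, combined with a simple reward-driven
# value update. All names and constants here are assumptions.

class ValueSystem:
    def __init__(self, habituation_rate=0.5, learning_rate=0.1):
        self.habituation_rate = habituation_rate  # assumed novelty decay factor
        self.learning_rate = learning_rate        # assumed RL step size
        self.exposure = {}   # stimulus -> number of times seen
        self.value = {}      # stimulus -> learned reward value

    def novelty(self, stimulus):
        # Habituation: novelty decays exponentially with repeated exposure.
        n = self.exposure.get(stimulus, 0)
        return self.habituation_rate ** n

    def observe(self, stimulus, reward=0.0):
        nov = self.novelty(stimulus)
        self.exposure[stimulus] = self.exposure.get(stimulus, 0) + 1
        # Reinforcement learning: move the value estimate toward the reward.
        v = self.value.get(stimulus, 0.0)
        self.value[stimulus] = v + self.learning_rate * (reward - v)
        # Integrated salience: novelty plus learned value.
        return nov + self.value[stimulus]
```

Under this sketch, repeatedly observing the same unrewarded stimulus yields a falling salience signal, while a rewarded stimulus retains value even after its novelty has habituated.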

Topics: Statistical Models, Machine Learning, Artificial Intelligence, Robotics
Publisher: Lund University Cognitive Studies
Year: 2002
OAI identifier: oai:cogprints.org:2511

