    Local Goals Driven Hierarchical Reinforcement Learning

    * This research was partially supported by the Latvian Science Foundation under grant No. 02-86d.
    Efficient exploration is of fundamental importance for autonomous agents that learn to act. Previous approaches to exploration in reinforcement learning usually address the case when the environment is fully observable. In contrast, the current paper, like the previous paper [Pch2003], studies the case when the environment is only partially observable. One additional difficulty is considered: complex temporal dependencies. To overcome this difficulty, a new hierarchical reinforcement learning algorithm is proposed. The learning algorithm exploits a very simple learning principle, similar to Q-learning, except that the lookup table has one more variable: the currently selected goal. Additionally, the algorithm uses the idea of an internal reward for achieving hard-to-reach states [Pch2003]. The proposed learning algorithm is experimentally investigated in partially observable maze problems, where it shows a robust ability to learn a good policy.
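    A minimal sketch of the goal-augmented lookup table described in this abstract is given below. The update rule follows standard Q-learning with the table keyed by (observation, goal, action); the parameter values, the epsilon-greedy action rule, the goal-selection heuristic, and all names are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: Q-learning over a table indexed by
# (observation, goal, action), with an internal reward granted when the
# currently selected goal is reached. Goals are chosen by preferring
# rarely visited (hard-to-reach) states; this heuristic is an assumption.
from collections import defaultdict
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed values
ACTIONS = ["up", "down", "left", "right"]

Q = defaultdict(float)           # keyed by (observation, goal, action)
visit_counts = defaultdict(int)  # how often each state has been reached

def select_action(obs, goal):
    """Epsilon-greedy choice over the goal-conditioned table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(obs, goal, a)])

def update(obs, goal, action, ext_reward, next_obs, goal_reached):
    """One Q-learning step; an internal reward is added when the goal is hit."""
    reward = ext_reward + (1.0 if goal_reached else 0.0)
    best_next = max(Q[(next_obs, goal, a)] for a in ACTIONS)
    td_error = reward + GAMMA * best_next - Q[(obs, goal, action)]
    Q[(obs, goal, action)] += ALPHA * td_error

def select_goal(candidate_states):
    """Pick the least-visited candidate state as the next internal goal."""
    return min(candidate_states, key=lambda s: visit_counts[s])
```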

    General Aspects of Constructing an Autonomous Adaptive Agent

    There are many approaches in artificial intelligence, some of them also coming from biology and neurophysiology. In this paper we review and discuss many of them, arranging our discussion around autonomous agent research. We highlight three aspects in our classification: the type of abstraction applied for representing agent knowledge, the implementation of the hypothesis processing mechanism, and the allowed degree of freedom in behaviour and self-organization. Using this classification, many approaches in artificial intelligence are evaluated. We then summarize all the discussed ideas and propose a series of general principles for building an autonomous adaptive agent.

    International Journal "Information Theories & Applications " Vol.11 GENERAL ASPECTS OF CONSTRUCTING AN AUTONOMOUS ADAPTIVE AGENT

    No full text
    Abstract: There are a great deal of approaches in artificial intelligence, some of them also coming from biology and neirophysiology. In this paper we are making a review, discussing many of them, and arranging our discussion around the autonomous agent research. We highlight three aspect in our classification: type of abstraction applied for representing agent knowledge, the implementation of hypothesis processing mechanism, allowed degree of freedom in behaviour and self-organizing. Using this classification many approaches in artificial intelligence are evaluated. Then we summarize all discussed ideas and propose a series of general principles for building an autonomous adaptive agent

    Efficient Exploration In Reinforcement Learning Based on Suffix Memory

    Reinforcement learning addresses the question of how an autonomous agent can learn to choose optimal actions to achieve its goals. Efficient exploration is of fundamental importance for autonomous agents that learn to act. Previous approaches to exploration in reinforcement learning usually address the case when the environment is fully observable. In contrast, we study the case when the environment is only partially observable. We consider different exploration techniques applied to the learning algorithm "Utile Suffix Memory" and, in addition, discuss an adaptive fringe depth. Experimental results in a partially observable maze show that exploration techniques have a serious impact on the performance of the learning algorithm.
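    The core idea behind suffix memory is that a single observation in a partially observable maze is ambiguous, so the agent's internal state is taken to be a suffix of its recent action/observation history. The sketch below illustrates only this idea together with a count-based exploration bonus; the fixed depth constant and the bonus form are assumptions, and the actual Utile Suffix Memory algorithm instead grows a suffix tree and tests fringe nodes for utility, which is not implemented here.

```python
# Hypothetical illustration of the suffix-memory idea: identify the
# agent's state with the suffix of its recent (action, observation)
# history, and make rarely seen suffixes attractive to explore.
from collections import defaultdict, deque

DEPTH = 3  # assumed fixed suffix depth; USM adapts this via its fringe

history = deque(maxlen=DEPTH)      # recent (action, observation) pairs
suffix_visits = defaultdict(int)   # visit counts per history suffix

def suffix_state():
    """The agent's internal state: the current history suffix."""
    return tuple(history)

def exploration_bonus(state, beta=0.5):
    """Count-based bonus: rarely visited suffixes score higher."""
    return beta / (1 + suffix_visits[state]) ** 0.5

def step(action, observation):
    """Record one interaction and return the resulting suffix state."""
    history.append((action, observation))
    state = suffix_state()
    suffix_visits[state] += 1
    return state
```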