
    A Concept Filtering Approach for Diverse Density to Discover Subgoals in Reinforcement Learning

    In the reinforcement learning context, subgoal discovery methods aim to find bottlenecks in the problem's state space so that the problem can be decomposed naturally into smaller subproblems. In this paper, we propose a concept filtering method that extends an existing subgoal discovery method, namely diverse density, so that it can be applied to both fully and partially observable RL problems. With the help of multiple instance learning, the proposed method successfully discovers useful subgoals. Compared to the original algorithm, the resulting approach runs significantly faster without sacrificing solution quality. Moreover, it can effectively be employed to find observational bottlenecks of problems with perceptually aliased states.
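
    The abstract builds on diverse-density subgoal discovery, in which states from successful trajectories form positive bags and states from unsuccessful trajectories form negative bags, and a candidate state that appears in every positive bag but no negative bag scores highly as a bottleneck. The Python sketch below illustrates that underlying noisy-or scoring, not the paper's concept filtering extension; the Gaussian similarity, the scale parameter, and all function names are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def instance_prob(candidate, state, scale=1.0):
        # Gaussian similarity between a candidate concept and an observed state.
        # The scale parameter is an illustrative assumption, not from the paper.
        diff = np.asarray(candidate, dtype=float) - np.asarray(state, dtype=float)
        return np.exp(-scale * np.sum(diff ** 2))

    def diverse_density(candidate, positive_bags, negative_bags):
        # positive_bags: successful trajectories (each a list of state vectors)
        # negative_bags: unsuccessful trajectories
        # A candidate scores high if it matches some state in every successful
        # trajectory and no state in any unsuccessful one, i.e. it is a bottleneck.
        score = 1.0
        for bag in positive_bags:
            # Noisy-or: probability that at least one state in the bag matches.
            score *= 1.0 - np.prod([1.0 - instance_prob(candidate, s) for s in bag])
        for bag in negative_bags:
            # Probability that no state in the failed trajectory matches.
            score *= np.prod([1.0 - instance_prob(candidate, s) for s in bag])
        return score

    def best_subgoal(candidates, positive_bags, negative_bags):
        # Return the candidate state with the highest diverse-density score.
        return max(candidates, key=lambda c: diverse_density(c, positive_bags, negative_bags))

    In a two-room gridworld, for example, the doorway cell lies on every successful trajectory and on few failed ones, so it would receive the highest score and be proposed as a subgoal; the paper's concept filtering reportedly makes this search significantly faster without hurting solution quality.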