13 research outputs found

    Bounded Optimal Exploration in MDP

    Within the framework of probably approximately correct Markov decision processes (PAC-MDP), much theoretical work has focused on methods that attain near-optimal behavior after a relatively long period of learning and exploration. However, practical concerns require satisfactory behavior within a short period of time. In this paper, we relax the PAC-MDP conditions to reconcile theoretically driven exploration methods with practical needs. We propose simple algorithms for discrete and continuous state spaces, and illustrate the benefits of the proposed relaxation via theoretical analyses and numerical examples. Our algorithms also maintain anytime error bounds and average loss bounds. Our approach accommodates both Bayesian and non-Bayesian methods.
    Comment: In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016
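    The abstract does not spell out the proposed algorithms, but the PAC-MDP exploration methods it builds on (e.g., R-MAX) share a common pattern: keep visit counts, treat under-visited state-action pairs optimistically, and plan in the resulting model. A minimal sketch of that generic pattern follows; it is not the paper's relaxed algorithm, and the environment interface (env.reset(), env.step()) and parameter names such as m_known and r_max are assumptions made for illustration.

    import numpy as np

    # Generic count-based optimistic exploration in a finite MDP, in the spirit
    # of R-MAX-style PAC-MDP algorithms. Illustrative sketch only; env, m_known,
    # and r_max are assumptions, not the paper's method.
    def optimistic_exploration(env, n_states, n_actions, r_max=1.0, gamma=0.95,
                               m_known=10, n_episodes=100, horizon=50):
        counts = np.zeros((n_states, n_actions))                  # N(s, a)
        reward_sum = np.zeros((n_states, n_actions))              # summed rewards
        trans_counts = np.zeros((n_states, n_actions, n_states))  # N(s, a, s')

        for _ in range(n_episodes):
            s = env.reset()
            for _ in range(horizon):
                # Optimistic model: pairs visited fewer than m_known times get
                # reward r_max and a self-loop, which drives the agent to them.
                known = counts >= m_known
                r_hat = np.where(known, reward_sum / np.maximum(counts, 1), r_max)
                p_hat = np.zeros_like(trans_counts)
                p_hat[known] = trans_counts[known] / counts[known][:, None]
                for si, ai in np.argwhere(~known):
                    p_hat[si, ai, si] = 1.0

                # Value iteration on the optimistic model, then act greedily.
                q = np.zeros((n_states, n_actions))
                for _ in range(200):
                    q = r_hat + gamma * (p_hat @ q.max(axis=1))
                a = int(np.argmax(q[s]))

                s_next, r, done = env.step(a)
                counts[s, a] += 1
                reward_sum[s, a] += r
                trans_counts[s, a, s_next] += 1
                s = s_next
                if done:
                    break
        return counts

    Read against the abstract, the relaxation concerns how strictly such a loop must guarantee near-optimality and how quickly satisfactory behavior is reached, not the basic count-and-plan structure sketched above.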

    An efficient approach to model-based hierarchical reinforcement learning

    National Research Foundation (NRF) Singapore under SMART and Future Mobility; Ministry of Education, Singapore under its Academic Research Funding Tier