
    Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

    We investigate the exploration of an unknown environment when no reward function is provided. Building on the incremental exploration setting introduced by Lim and Auer [1], we define the objective of learning the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps (in expectation) from a reference state $s_0$. In this paper, we introduce a novel model-based approach that interleaves discovering new states from $s_0$ and improving the accuracy of a model estimate that is used to compute goal-conditioned policies to reach newly discovered states. The resulting algorithm, DisCo, achieves a sample complexity scaling as $\tilde{O}(L^5 S_{L+\epsilon} \Gamma_{L+\epsilon} A \epsilon^{-2})$, where $A$ is the number of actions, $S_{L+\epsilon}$ is the number of states that are incrementally reachable from $s_0$ in $L+\epsilon$ steps, and $\Gamma_{L+\epsilon}$ is the branching factor of the dynamics over such states. This improves over the algorithm proposed in [1] in both $\epsilon$ and $L$ at the cost of an extra $\Gamma_{L+\epsilon}$ factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem defined on the $L$-reachable states with minimum cost $c_{\min}$. Finally, we report preliminary empirical results confirming our theoretical findings.

    Comment: NeurIPS 2020
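    As a rough illustration of how the stated bound behaves, the sketch below evaluates the leading term $L^5 S_{L+\epsilon} \Gamma_{L+\epsilon} A \epsilon^{-2}$, ignoring constants and the logarithmic factors hidden by $\tilde{O}$. The parameter values are hypothetical and chosen only to show the scaling in $\epsilon$; they are not taken from the paper.

    # Illustrative only: leading term of DisCo's sample-complexity bound,
    # L^5 * S_{L+eps} * Gamma_{L+eps} * A / eps^2, with constants and log
    # factors (hidden by the O-tilde notation) dropped. All parameter values
    # below are hypothetical and serve only to show the scaling in eps.

    def disco_bound(L: int, S: int, Gamma: int, A: int, eps: float) -> float:
        """Leading term of the sample-complexity bound (up to constants/logs)."""
        return (L ** 5) * S * Gamma * A / (eps ** 2)

    if __name__ == "__main__":
        # Hypothetical environment: L = 10, 100 incrementally reachable states,
        # branching factor 5, 4 actions.
        for eps in (0.5, 0.1, 0.05):
            print(f"eps = {eps}: ~{disco_bound(L=10, S=100, Gamma=5, A=4, eps=eps):.2e} samples")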