    Stabilizing Value Function Approximation with the BFBP Algorithm

    We address the problem of non-convergence of online reinforcement learning algorithms (e.g., Q-learning and SARSA(λ)) by adopting an incremental-batch approach that separates the exploration process from the function fitting process. Our BFBP (Batch Fit to Best Paths) algorithm alternates between an exploration phase (during which trajectories are generated to try to find fragments of the optimal policy) and a function fitting phase (during which a function approximator is fit to the best known paths from start states to terminal states). An advantage of this approach is that batch value-function fitting is a global process, which allows it to address the tradeoffs in function approximation that cannot be handled by local, online algorithms. This approach was pioneered by Boyan and Moore with their GrowSupport and ROUT algorithms.
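    The alternation described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' BFBP implementation: the chain-walk environment, the hyperparameters, and the use of a tabular return-to-go "fit" in place of a real function approximator are all assumptions made for the example.

    ```python
    import random

    class ChainEnv:
        """Toy deterministic chain MDP (illustrative stand-in): states 0..n,
        actions move -1/+1 (clamped at 0), reward -1 per step, state n terminal."""
        def __init__(self, n=5):
            self.n = n

        def start(self):
            return 0

        def step(self, state, action):
            nxt = min(max(state + action, 0), self.n)
            return nxt, -1.0, nxt == self.n

    def explore(env, value, epsilon, max_steps=50):
        """Exploration phase: generate one trajectory, acting greedily with
        respect to the current fitted values, with epsilon-random actions."""
        s, path = env.start(), []
        for _ in range(max_steps):
            if random.random() < epsilon or not value:
                a = random.choice([-1, 1])
            else:
                # One-step lookahead against the fitted value function; unseen
                # states default to 0.0, which also serves as the terminal value.
                a = max([-1, 1],
                        key=lambda a: value.get(min(max(s + a, 0), env.n), 0.0))
            nxt, r, done = env.step(s, a)
            path.append((s, r))
            s = nxt
            if done:
                return path, True
        return path, False

    def bfbp(env, iterations=20, episodes=10, epsilon=0.3, seed=0):
        random.seed(seed)
        value = {}   # tabular "approximator": state -> fitted value
        best = None  # (return, trajectory) of the best complete path found
        for _ in range(iterations):
            # Exploration phase: search for better paths to a terminal state.
            for _ in range(episodes):
                path, done = explore(env, value, epsilon)
                ret = sum(r for _, r in path)
                if done and (best is None or ret > best[0]):
                    best = (ret, path)
            # Fitting phase: refit the value function to the best known path,
            # assigning each state its best return-to-go along that path.
            if best is not None:
                value, togo = {}, 0.0
                for s, r in reversed(best[1]):
                    togo += r
                    value[s] = max(value.get(s, float('-inf')), togo)
        return value

    env = ChainEnv(n=5)
    v = bfbp(env)
    print(v[0])  # the optimal value of the start state is -5.0
    ```

    The key structural point from the abstract survives even in this toy version: fitting happens globally, over whole stored paths, rather than through local online updates after each transition.
    
    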