
    Look-ahead with mini-bucket heuristics for MPE

    The paper investigates the potential of look-ahead in the context of AND/OR search in graphical models using the Mini-Bucket heuristic for combinatorial optimization tasks (e.g., MAP/MPE or weighted CSPs). We present and analyze the complexity of computing the residual (a.k.a. Bellman update) of the Mini-Bucket heuristic and show how this can be used to identify which parts of the search space are more likely to benefit from look-ahead and how to bound its overhead. We also rephrase the look-ahead computation as a graphical model, to facilitate structure-exploiting inference schemes. We demonstrate empirically that augmenting Mini-Bucket heuristics with look-ahead is a cost-effective way of increasing the power of Branch-and-Bound search.
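    As a rough illustration of the general idea (not the paper's implementation), the sketch below shows a depth-limited look-ahead that tightens a static heuristic, and the corresponding residual used to decide where look-ahead is likely to pay off. It is written in a min-sum (negative-log) setting; the node interface (children, edge_cost, heuristic, is_leaf) is a hypothetical stand-in for an AND/OR search node carrying a mini-bucket bound.

```python
# Illustrative sketch only: depth-limited look-ahead over a generic static
# heuristic in a min-sum setting (e.g., MPE in the negative-log domain).
# The node interface is invented for illustration.

def lookahead(node, depth):
    """Tighten the static bound by expanding `depth` levels below the node."""
    if depth == 0 or node.is_leaf():
        return node.heuristic()
    return min(node.edge_cost(child) + lookahead(child, depth - 1)
               for child in node.children())

def residual(node, depth=1):
    """Bellman-style residual: how much look-ahead improves the static bound.
    Large residuals flag the parts of the search space where spending time
    on look-ahead is most likely to pay off inside Branch-and-Bound."""
    return node.heuristic() - lookahead(node, depth)
```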

    A Lagrangian Policy for Optimal Energy Storage Control

    This paper presents a millisecond-level look-ahead control algorithm for energy storage with constant space complexity and worst-case linear run-time complexity. The algorithm connects the optimal control to the Lagrange multiplier associated with the state-of-charge constraint. It is compared to solving the look-ahead control problem with a state-of-the-art convex optimization solver. Simulation results show that both methods obtain the same control result, while the proposed algorithm runs up to 100,000 times faster and solves most problems within one millisecond. The theoretical results from developing this algorithm also provide key insights into designing optimal energy storage control schemes at the centralized system level as well as under distributed settings.
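    The abstract's central observation, that the multiplier of the state-of-charge constraint behaves like a price threshold, can be illustrated with a simplified charge/discharge rule. This is a toy sketch of that concept, not the paper's millisecond-level algorithm; all names (prices, lambda_, capacity, power, eta) are hypothetical.

```python
# Simplified threshold policy: a single Lagrange multiplier lambda_ acts as a
# price threshold for charge/discharge decisions over a look-ahead horizon,
# while power and state-of-charge limits are enforced at every step.

def threshold_policy(prices, lambda_, capacity, power, eta=0.95, soc0=0.0):
    """Charge when the price is cheap relative to lambda_, discharge when it is
    expensive, and stay idle in the dead band between the two thresholds."""
    soc, schedule = soc0, []
    for p in prices:
        if p < lambda_ * eta:                      # cheap: buying is worth it after losses
            u = min(power, (capacity - soc) / eta)
        elif p > lambda_ / eta:                    # expensive: selling is worth it after losses
            u = -min(power, soc * eta)
        else:                                      # dead band: idle
            u = 0.0
        soc += u * eta if u > 0 else u / eta       # update stored energy with efficiency
        schedule.append(u)
    return schedule
```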

    Dynamic real-time hierarchical heuristic search for pathfinding

    Movement of units in Real-Time Strategy (RTS) games is a non-trivial and challenging task, mainly due to three factors: constraints on CPU and memory usage, the dynamicity of the game world, and concurrency. In this paper, we focus on finding a novel solution to the pathfinding problem in RTS games for units controlled by the computer. The solution combines two AI planning approaches: Hierarchical Task Network (HTN) and Real-Time Heuristic Search (RHS). In the proposed solution, HTNs are used as a dynamic abstraction of the game map, while RHS works as a planning engine that interleaves plan making and action execution. The article provides algorithmic details of the model, while its empirical evaluation uses a real-time strategy game engine called ORTS (Open Real-Time Strategy). The implementation of the model and its evaluation methods are still in progress; however, results of the automatic HTN creation have been obtained for a small-scale game map.
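    As a rough illustration of the Real-Time Heuristic Search half of the combination, the sketch below implements one LRTA*-style plan/act cycle on a plain adjacency dict standing in for the HTN-derived map abstraction. The graph, heuristic, and cost names are assumptions for illustration, not the article's implementation.

```python
# Minimal LRTA*-style step: plan one move ahead on an abstract graph, update
# the stored heuristic for the current node, and return the node to move to.
# Planning and acting interleave by calling this repeatedly as the agent moves.

def lrta_star_step(graph, h, current, goal, cost=lambda a, b: 1):
    """One planning/acting cycle of real-time heuristic search."""
    if current == goal:
        return current
    # Look one step ahead and pick the most promising neighbor.
    best = min(graph[current], key=lambda n: cost(current, n) + h.get(n, 0))
    # Learning step: raise h(current) toward the best one-step estimate.
    h[current] = max(h.get(current, 0), cost(current, best) + h.get(best, 0))
    return best
```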

    Model Learning for Look-ahead Exploration in Continuous Control

    We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and are unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, though it operates over low-level primitive actions itself, and thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and that it converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration.
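    A toy sketch of the exploration idea under stated assumptions: each skill has a learned coarse dynamics model mapping the state before execution to the state after, and exploration scores short skill sequences by unrolling those models forward. `skill_models` and `score` are hypothetical stand-ins, not the authors' code.

```python
# Illustrative look-ahead over learned coarse skill dynamics: exhaustively
# unroll every skill sequence of length `horizon` and commit to the first
# skill of the best-scoring sequence.

from itertools import product

def lookahead_over_skills(state, skill_models, score, horizon=2):
    """Return the first skill of the best sequence found by coarse look-ahead."""
    best_seq, best_val = None, float("-inf")
    for seq in product(range(len(skill_models)), repeat=horizon):
        s = state
        for k in seq:
            s = skill_models[k](s)       # learned coarse transition: s -> s'
        v = score(s)                     # e.g., novelty or estimated value
        if v > best_val:
            best_seq, best_val = seq, v
    return best_seq[0]
```

    In the abstract's framing, this coarse-level look-ahead only guides exploration; the learned policy itself still acts over low-level primitive actions.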