Learning in Real-Time Search: A Unifying Framework
Real-time search methods are suited for tasks in which the agent
interacts with an initially unknown environment in real time. In such
simultaneous planning and learning problems, the agent has to select its
actions in a limited amount of time, while sensing only a local part of the
environment centered at the agent's current location. Real-time heuristic search
agents select actions by performing a limited lookahead search and evaluating the
frontier states with a heuristic function. Over repeated experiences, they
refine the heuristic values of states to avoid infinite loops and to converge to
better solutions. The widespread occurrence of such settings in autonomous software and
hardware agents has led to an explosion of real-time search algorithms over the
last two decades. A potential user is confronted not only with a hodgepodge of
algorithms, but also with the choice of the control parameters they use. In
this paper we address both problems. The first contribution is the introduction
of a simple three-parameter framework (named LRTS) that extracts the core
ideas behind many existing algorithms. We then prove that LRTA*, epsilon-LRTA*,
SLA*, and gamma-Trap algorithms are special cases of our framework. Thus, they
are unified and extended with additional features. Second, we prove
completeness and convergence of any algorithm covered by the LRTS framework.
Third, we prove several upper-bounds relating the control parameters and
solution quality. Finally, we analyze the influence of the three control
parameters empirically in the realistic scalable domains of real-time
navigation on initially unknown maps from a commercial role-playing game as
well as routing in ad hoc sensor networks.
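The core loop the abstract describes — limited lookahead, frontier evaluation with a heuristic, and refinement of heuristic values to escape loops — can be sketched minimally. This is a depth-1 LRTA*-style sketch with hypothetical helper names (`neighbors`, `cost`), not the paper's parameterized LRTS framework:

```python
def lrta_star(start, goal, neighbors, cost, h):
    """Depth-1 LRTA* sketch: h is a dict of learned heuristic values,
    mutated in place across (and within) trials."""
    s = start
    path = [s]
    while s != goal:
        # Evaluate the frontier: f(s') = c(s, s') + h(s')
        best = min(neighbors(s), key=lambda s2: cost(s, s2) + h.get(s2, 0))
        # Learning rule: raise h(s) to the best f-value so that
        # revisiting s later does not re-enter the same loop.
        h[s] = max(h.get(s, 0), cost(s, best) + h.get(best, 0))
        s = best
        path.append(s)
    return path
```

On repeated trials over the same map, the updated `h` values steer the agent toward progressively better solutions, which is the convergence behavior the framework's proofs address.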
Simulating the use of macro-actions through action reordering
The use of macro-actions in planning introduces a trade-off: macro-actions can offer search guidance by suggesting sequences of actions, but can also make search more expensive by increasing the branching factor. In this paper we present a technique for simulating the use of macro-actions by altering the order in which actions are considered for application during enforced hill-climbing search. Actions are ordered by the number of times each has occurred, in past solution plans, immediately after the last action added to the plan. We demonstrate that this action-reordering technique can offer improved search performance without the negative performance impacts often observed when using macro-actions.
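The ordering rule above — try first the actions that most often followed the last added action in past plans — can be sketched as follows. The function names and the plan representation (plans as action-name lists) are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter, defaultdict

def learn_successor_counts(past_plans):
    """Count how often action b immediately followed action a
    across a set of past solution plans."""
    counts = defaultdict(Counter)
    for plan in past_plans:
        for a, b in zip(plan, plan[1:]):
            counts[a][b] += 1
    return counts

def reorder_applicable(applicable, last_action, counts):
    """Order applicable actions so that frequent successors of
    last_action are expanded first; sorted() is stable, so ties
    keep the planner's original ordering."""
    return sorted(applicable, key=lambda act: -counts[last_action][act])
```

Because only the expansion order changes, the branching factor is untouched: the search gets macro-like guidance without the extra branches that explicit macro-actions introduce.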
Sampling-Based Methods for Factored Task and Motion Planning
This paper presents a general-purpose formulation of a large class of
discrete-time planning problems, with hybrid state and control spaces, as
factored transition systems. Factoring allows state transitions to be described
as the intersection of several constraints each affecting a subset of the state
and control variables. Robotic manipulation problems with many movable objects
involve constraints that only affect several variables at a time and therefore
exhibit large amounts of factoring. We develop a theoretical framework for
solving factored transition systems with sampling-based algorithms. The
framework characterizes conditions on the submanifold in which solutions lie,
leading to a characterization of robust feasibility that incorporates
dimensionality-reducing constraints. It then connects those conditions to
corresponding conditional samplers that can be composed to produce values on
this submanifold. We present two domain-independent, probabilistically complete
planning algorithms that take, as input, a set of conditional samplers. We
demonstrate the empirical efficiency of these algorithms on a set of
challenging task and motion planning problems involving picking, placing, and
pushing.
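The composition of conditional samplers described above can be illustrated with a toy sketch: each sampler produces a value for one variable given values of its input variables, and samplers fire once their inputs are bound. The interface (triples of inputs, output, function) is a hypothetical simplification of the paper's formulation:

```python
def compose_samplers(samplers, seed_values):
    """Greedy composition sketch: samplers is a list of
    (input_vars, output_var, fn) triples; each fn maps bound input
    values to a sample for output_var on the solution submanifold."""
    values = dict(seed_values)
    progress = True
    while progress:
        progress = False
        for inputs, output, fn in samplers:
            if output not in values and all(v in values for v in inputs):
                values[output] = fn(*(values[v] for v in inputs))
                progress = True
    return values
```

In a manipulation setting, for example, a placement-pose sampler might feed a grasp sampler conditioned on that pose; factoring keeps each sampler dependent on only a few variables.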