Automated sequence and motion planning for robotic spatial extrusion of 3D trusses
While robotic spatial extrusion has demonstrated a new and efficient means to
fabricate 3D truss structures in architectural scale, a major challenge remains
in automatically planning extrusion sequence and robotic motion for trusses
with unconstrained topologies. This paper presents the first attempt in the
field to rigorously formulate the extrusion sequence and motion planning (SAMP)
problem, using a constraint satisfaction problem (CSP) encoding. Furthermore, this research proposes a new
hierarchical planning framework to solve the extrusion SAMP problems that
usually have a long planning horizon and 3D configuration complexity. By
decoupling sequence and motion planning, the planning framework is able to
efficiently solve the extrusion sequence, end-effector poses, joint
configurations, and transition trajectories for spatial trusses with
nonstandard topologies. This paper also presents the first detailed computational
data revealing the runtime bottleneck in solving SAMP problems, which provides
insight and a comparison baseline for future algorithmic development. Together
with the algorithmic results, this paper also presents an open-source and
modularized software implementation called Choreo that is machine-agnostic. To
demonstrate the power of this algorithmic framework, three case studies,
including real fabrication and simulation results, are presented.
Comment: 24 pages, 16 figures
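The connectivity requirement at the heart of extrusion sequencing can be illustrated with a small sketch. This is a hypothetical simplification of the SAMP formulation described above, not Choreo's actual encoding: each element may only be printed once it touches the ground or an already-printed element.

```python
# Sketch: extrusion sequencing under a connectivity constraint (a toy
# stand-in for the paper's CSP formulation; the truss below is invented).
def plan_sequence(elements, grounded, neighbors):
    """Order truss elements so each one touches the ground or an
    already-printed element when it is extruded."""
    printed, sequence = set(), []
    remaining = set(elements)
    while remaining:
        # elements whose extrusion is currently feasible
        feasible = [e for e in remaining
                    if e in grounded or neighbors[e] & printed]
        if not feasible:
            return None  # dead end: no consistent extrusion order
        e = min(feasible)  # deterministic tie-break
        printed.add(e)
        sequence.append(e)
        remaining.remove(e)
    return sequence

# Tiny 4-element truss: elements 0 and 1 rest on the ground.
neighbors = {0: {2}, 1: {3}, 2: {0, 3}, 3: {1, 2}}
seq = plan_sequence([0, 1, 2, 3], grounded={0, 1}, neighbors=neighbors)
```

A real planner must additionally check end-effector poses and collision-free transitions for each candidate, which is where the hierarchical decoupling above pays off.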
Sampling-Based Methods for Factored Task and Motion Planning
This paper presents a general-purpose formulation of a large class of
discrete-time planning problems, with hybrid state and control-spaces, as
factored transition systems. Factoring allows state transitions to be described
as the intersection of several constraints each affecting a subset of the state
and control variables. Robotic manipulation problems with many movable objects
involve constraints that each affect only a few variables at a time and therefore
exhibit large amounts of factoring. We develop a theoretical framework for
solving factored transition systems with sampling-based algorithms. The
framework characterizes conditions on the submanifold in which solutions lie,
leading to a characterization of robust feasibility that incorporates
dimensionality-reducing constraints. It then connects those conditions to
corresponding conditional samplers that can be composed to produce values on
this submanifold. We present two domain-independent, probabilistically complete
planning algorithms that take, as input, a set of conditional samplers. We
demonstrate the empirical efficiency of these algorithms on a set of
challenging task and motion planning problems involving picking, placing, and
pushing.
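The idea of composing conditional samplers to produce values on the solution submanifold can be sketched as follows. The samplers here are invented 1-D stand-ins (a "pose" sampler, a "grasp" sampler conditioned on the pose, and an "IK" sampler conditioned on the grasp), not the paper's actual geometry.

```python
import random

# Sketch: chaining conditional samplers so each sampled value is
# consistent with the values it is conditioned on (toy 1-D math).
def pose_sampler(rng):
    while True:
        yield rng.uniform(0.0, 1.0)          # object pose p

def grasp_sampler(p, rng):
    while True:
        yield p + rng.uniform(-0.1, 0.1)     # grasp conditioned on pose p

def ik_sampler(g):
    yield 2.0 * g                            # "configuration" reaching grasp g

def compose(rng):
    """Compose the samplers; the output triple lies on the submanifold
    defined by the grasp and kinematics constraints."""
    p = next(pose_sampler(rng))
    g = next(grasp_sampler(p, rng))
    q = next(ik_sampler(g))
    return p, g, q

rng = random.Random(0)
p, g, q = compose(rng)
```

The dimensionality-reducing constraints in the abstract correspond to samplers like `ik_sampler` whose outputs are functions of their inputs rather than free choices.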
PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning
Many planning applications involve complex relationships defined on
high-dimensional, continuous variables. For example, robotic manipulation
requires planning with kinematic, collision, visibility, and motion constraints
involving robot configurations, object poses, and robot trajectories. These
constraints typically require specialized procedures to sample satisfying
values. We extend PDDL to support a generic, declarative specification for
these procedures that treats their implementation as black boxes. We provide
domain-independent algorithms that reduce PDDLStream problems to a sequence of
finite PDDL problems. We also introduce an algorithm that dynamically balances
exploring new candidate plans and exploiting existing ones. This enables the
algorithm to greedily search the space of parameter bindings to more quickly
solve tightly-constrained problems as well as locally optimize to produce
low-cost solutions. We evaluate our algorithms on three simulated robotic
planning domains as well as several real-world robotic tasks.
Comment: International Conference on Automated Planning and Scheduling (ICAPS) 2020
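The optimistic planning loop described above can be sketched at a high level. This is a heavy simplification under invented names: the "planner" is a stub, and the streams are plain callables standing in for blackbox samplers.

```python
# Sketch: the optimistic plan-then-bind loop of PDDLStream, simplified
# (plan_with_placeholders is a stub for the finite PDDL planner call).
def solve(goal, streams, max_iterations=10):
    """Alternate between planning over optimistic placeholder values and
    trying to bind those placeholders with real sampled values."""
    facts = set()
    for _ in range(max_iterations):
        plan = plan_with_placeholders(goal, facts)   # optimistic candidate plan
        bindings = {}
        for name in plan:
            value = streams[name]()                  # call the blackbox sampler
            if value is None:
                break                                # binding failed; replan
            bindings[name] = value
            facts.add((name, value))
        else:
            return bindings                          # every placeholder bound
    return None

def plan_with_placeholders(goal, facts):
    return list(goal)  # stand-in: pretend the planner orders the goal streams

streams = {"pose": lambda: 0.5, "conf": lambda: (0.5, 1.0)}
result = solve(["pose", "conf"], streams)
```

The adaptive algorithm in the abstract additionally decides, at each iteration, whether to generate new candidate plans or to retry sampling on existing ones.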
Active model learning and diverse action sampling for task and motion planning
The objective of this work is to augment the basic abilities of a robot by
learning to use new sensorimotor primitives to enable the solution of complex
long-horizon problems. Solving long-horizon problems in complex domains
requires flexible generative planning that can combine primitive abilities in
novel combinations to solve problems as they arise in the world. In order to
plan to combine primitive actions, we must have models of the preconditions and
effects of those actions: under what circumstances will executing this
primitive achieve some particular effect in the world?
We use, and develop novel improvements on, state-of-the-art methods for
active learning and sampling. We use Gaussian process methods for learning the
conditions of operator effectiveness from small numbers of expensive training
examples collected by experimentation on a robot. We develop adaptive sampling
methods for generating diverse elements of continuous sets (such as robot
configurations and object poses) during planning for solving a new task, so
that planning is as efficient as possible. We demonstrate these methods in an
integrated system, combining newly learned models with an efficient
continuous-space robot task and motion planner to learn to solve long-horizon problems more efficiently than was previously possible.
Comment: Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.
https://www.youtube.com/playlist?list=PLoWhBFPMfSzDbc8CYelsbHZa1d3uz-W_
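One simple way to generate diverse elements of a continuous set, in the spirit of the adaptive sampling described above, is greedy max-min selection from a candidate pool. This is an illustrative stand-in, not the paper's method:

```python
# Sketch: greedy max-min selection of k diverse samples from a 1-D pool
# (a toy stand-in for diversity-aware sampling of configurations/poses).
def diverse_subset(candidates, k):
    """Greedily pick k points, each maximizing its minimum distance to
    the points already chosen."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(abs(c - s) for s in chosen),
        )
        chosen.append(best)
    return chosen

picks = diverse_subset([0.0, 0.1, 0.2, 0.9, 1.0], k=3)
```

Spreading samples out this way reduces the chance that the planner repeatedly retries near-duplicate values that fail for the same geometric reason.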
Sequence-Based Plan Feasibility Prediction for Efficient Task and Motion Planning
Robots planning long-horizon behavior in complex environments must be able to
quickly reason about the impact of the environment's geometry on what plans are
feasible, i.e., whether there exist action parameter values that satisfy all
constraints on a candidate plan. In tasks involving articulated and movable
obstacles, typical Task and Motion Planning (TAMP) algorithms spend most of
their runtime attempting to solve unsolvable constraint satisfaction problems
imposed by infeasible plan skeletons. We developed a novel Transformer-based
architecture, PIGINet, that predicts plan feasibility based on the initial
state, goal, and candidate plans, fusing image and text embeddings with state
features. The model sorts the plan skeletons produced by a TAMP planner
according to the predicted satisfiability likelihoods. We evaluate the runtime
of our learning-enabled TAMP algorithm on several distributions of kitchen
rearrangement problems, comparing its performance to that of non-learning
baselines and algorithm ablations. Our experiments show that PIGINet
substantially improves planning efficiency, cutting down runtime by 80% on
average on pick-and-place problems with articulated obstacles. It also achieves
zero-shot generalization to problems with unseen object categories thanks to
its visual encoding of objects.
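The way a feasibility predictor plugs into a TAMP loop can be sketched as follows. The scoring and refinement functions below are made-up stand-ins for PIGINet's learned model and the planner's constraint solver:

```python
# Sketch: ordering plan skeletons by predicted feasibility before the
# expensive constraint-satisfaction attempts (scoring function is a toy).
def rank_skeletons(skeletons, predict_feasibility):
    """Try skeletons in decreasing order of predicted satisfiability."""
    return sorted(skeletons, key=predict_feasibility, reverse=True)

def solve_tamp(skeletons, predict_feasibility, try_to_refine):
    for skeleton in rank_skeletons(skeletons, predict_feasibility):
        solution = try_to_refine(skeleton)  # constraint satisfaction attempt
        if solution is not None:
            return solution
    return None

# Toy example: pretend shorter skeletons are more likely to be feasible,
# and only the two-action skeleton actually refines.
skeletons = [("pick", "place", "pull"), ("pick", "place")]
score = lambda s: 1.0 / len(s)
refine = lambda s: s if len(s) == 2 else None
plan = solve_tamp(skeletons, score, refine)
```

The runtime savings reported above come precisely from avoiding `try_to_refine` calls on skeletons the model scores as likely infeasible.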
FFRob: An Efficient Heuristic for Task and Motion Planning
Manipulation problems involving many objects present substantial challenges for motion planning algorithms due to the high dimensionality and multi-modality of the search space. Symbolic task planners can efficiently construct plans involving many entities but cannot incorporate the constraints from geometry and kinematics. In this paper, we show how to extend the heuristic ideas from one of the most successful symbolic planners in recent years, the FastForward (FF) planner, to motion planning, and to compute it efficiently. We use a multi-query roadmap structure that can be conditionalized to model different placements of movable objects. The resulting tightly integrated planner is simple and performs efficiently in a collection of tasks involving manipulation of many objects.
Funding: National Science Foundation (U.S.) (Grant No. 019868); United States Office of Naval Research, Multidisciplinary University Research Initiative (grant N00014-09-1-1051); United States Air Force, Office of Scientific Research (grant AOARD-104135); Singapore Ministry of Education
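The symbolic core that FFRob extends is the FF-style relaxed-reachability heuristic, which counts action layers while ignoring delete effects. A minimal sketch, with an invented toy domain:

```python
# Sketch: an FF-style relaxed-plan heuristic. Facts, once achieved, are
# never deleted, so reachability can be computed layer by layer.
def hff(state, goal, actions):
    """Return the number of relaxed action layers needed to reach the
    goal, or None if the goal is unreachable even when relaxed."""
    reached = set(state)
    layers = 0
    while not goal <= reached:
        new = {f for pre, add in actions if pre <= reached for f in add}
        if new <= reached:
            return None          # fixpoint without reaching the goal
        reached |= new
        layers += 1
    return layers

# Toy domain: (preconditions, add effects) pairs.
actions = [
    (frozenset({"at_a"}), frozenset({"at_b"})),
    (frozenset({"at_b"}), frozenset({"holding"})),
]
h = hff({"at_a"}, {"holding"}, actions)
```

FFRob's contribution is evaluating a heuristic of this shape over a conditionalized roadmap, so that geometric reachability under different object placements feeds into the layer computation.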
Scalable and Probabilistically Complete Planning for Robotic Spatial Extrusion
There is increasing demand for automated systems that can fabricate 3D
structures. Robotic spatial extrusion has become an attractive alternative to
traditional layer-based 3D printing due to a manipulator's flexibility to print
large, directionally-dependent structures. However, existing extrusion planning
algorithms require a substantial amount of human input, do not scale to large
instances, and lack theoretical guarantees. In this work, we present a rigorous
formalization of robotic spatial extrusion planning and provide several
efficient and probabilistically complete planning algorithms. The key planning
challenge is, throughout the printing process, satisfying both stiffness
constraints that limit the deformation of the structure and geometric
constraints that ensure the robot does not collide with the structure. We show
that, although these constraints often conflict with each other, a greedy
backward state-space search guided by a stiffness-aware heuristic is able to
successfully balance both constraints. We empirically compare our methods on a
benchmark of over 40 simulated extrusion problems. Finally, we apply our
approach to 3 real-world extrusion problems.
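The greedy backward search described above can be sketched in miniature: disassemble the complete structure element by element, keeping every intermediate state stable, then reverse the removal order into a print order. The stability and removability checks below are toy stand-ins for the paper's finite-element stiffness analysis and collision checks.

```python
# Sketch: greedy backward state-space search with a stand-in
# stiffness-aware heuristic (all domain functions here are toys).
def backward_plan(elements, is_stable, is_removable, heuristic):
    """Remove elements from the full structure one at a time, preferring
    the heuristic's best candidate, then reverse into a print order."""
    state = set(elements)
    removal_order = []
    while state:
        candidates = [e for e in state
                      if is_removable(e, state) and is_stable(state - {e})]
        if not candidates:
            return None          # every removal violates a constraint
        e = min(candidates, key=heuristic)
        state.remove(e)
        removal_order.append(e)
    return list(reversed(removal_order))  # backward removal = forward print

# Toy: elements are heights; a state is "stable" if it has no floating gap.
elements = [1, 2, 3]
is_stable = lambda s: s == set(range(1, len(s) + 1))
is_removable = lambda e, s: e == max(s)   # can only peel from the top
order = backward_plan(elements, is_stable, is_removable, heuristic=lambda e: e)
```

Searching backward is attractive here because the full structure is the state about which the most is known, and each removal only has to preserve, not establish, stability.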
DiMSam: Diffusion Models as Samplers for Task and Motion Planning under Partial Observability
Task and Motion Planning (TAMP) approaches are effective at planning
long-horizon autonomous robot manipulation. However, because they require a
planning model, it can be difficult to apply them to domains where the
environment and its dynamics are not fully known. We propose to overcome these
limitations by leveraging deep generative modeling, specifically diffusion
models, to learn constraints and samplers that capture these
difficult-to-engineer aspects of the planning model. These learned samplers are
composed and combined within a TAMP solver in order to jointly find action
parameter values that satisfy the constraints along a plan. To tractably make
predictions for unseen objects in the environment, we define these samplers on
low-dimensional learned latent embeddings of changing object state. We evaluate
our approach in an articulated object manipulation domain and show how the
combination of classical TAMP, generative learning, and latent embeddings
enables long-horizon constraint-based reasoning.
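The data flow of sampling in a learned latent space can be sketched abstractly. The encoder, sampler, and decoder below are invented linear stand-ins for the paper's learned diffusion models, shown only to make the encode-sample-decode pipeline concrete:

```python
# Sketch: predicting a successor state through a low-dimensional latent
# embedding (all three maps are toy stand-ins for learned models).
def encode(state):
    return 0.5 * state                 # object state -> latent embedding

def latent_sampler(z, action):
    return z + action                  # "sampled" successor latent

def decode(z):
    return 2.0 * z                     # latent -> predicted object state

def predict_next_state(state, action):
    """Make the prediction in latent space, then decode back."""
    return decode(latent_sampler(encode(state), action))

nxt = predict_next_state(4.0, action=1.0)
```

Working in the latent space is what lets such samplers generalize to unseen objects: the TAMP solver composes them exactly as it would any other conditional sampler.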