15,879 research outputs found
Online, interactive user guidance for high-dimensional, constrained motion planning
We consider the problem of planning a collision-free path for a
high-dimensional robot. Specifically, we suggest a planning framework in which
a motion-planning algorithm can obtain guidance from a user. In contrast to
existing approaches that try to speed up planning by incorporating experiences
or demonstrations ahead of planning, we propose seeking user guidance only when
the planner identifies that it has ceased to make significant progress towards
the goal. Guidance is provided in the form of an intermediate configuration,
which is used to bias the planner to pass through it. We demonstrate our
approach for the case where the planning algorithm is Multi-Heuristic A*
(MHA*) and the robot is a 34-DOF humanoid. We show that our approach allows
computing highly constrained paths with little domain knowledge. Without our
approach, solving such problems requires carefully crafting domain-dependent
heuristics.
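The biasing idea in this abstract can be illustrated with a minimal sketch. This is not the paper's MHA* implementation; it is a plain grid A* in which a user-supplied intermediate configuration reshapes the heuristic so the search is pulled through it. The grid, `guided_h`, and `guided_astar` are all illustrative assumptions; the biased heuristic is inadmissible, trading optimality for guidance.

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def guided_h(node, goal, waypoint=None):
    # Plain goal-directed heuristic, or one biased through the user's
    # intermediate configuration (estimate cost via the waypoint).
    if waypoint is None:
        return manhattan(node, goal)
    return manhattan(node, waypoint) + manhattan(waypoint, goal)

def guided_astar(start, goal, blocked, size, waypoint=None):
    # Standard A* over a size x size grid with 4-connected moves.
    frontier = [(guided_h(start, goal, waypoint), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in blocked:
                heapq.heappush(frontier,
                               (g + 1 + guided_h(nxt, goal, waypoint),
                                g + 1, nxt, path + [nxt]))
    return None
```

In the paper's setting the waypoint would be requested only once the planner stalls; here it is simply passed in up front to keep the sketch short.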
On the Collaboration of an Automatic Path-Planner and a Human User for Path-Finding in Virtual Industrial Scenes
This paper describes a global interactive framework enabling an automatic path-planner and a user to collaborate in finding a path in cluttered virtual environments. First, a collaborative architecture including the user and the planner is described. Then, for real-time purposes, a motion planner divided into distinct steps is presented: a preliminary workspace discretization is performed without time limitations at the beginning of the simulation; then, using this pre-computed data, a second algorithm finds a collision-free path in real time. Once the path is found, a haptic artificial guidance along the path is provided to the user. The user can then influence the planner by not following the path, automatically requesting a new path search. Performance is measured in tests based on assembly simulations in CAD scenes.
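The replan trigger described above can be sketched in a few lines: the user is haptically guided along the planned path, and once they stray beyond a tolerance, a new path search is requested from their current position. The function names, path representation, and threshold here are illustrative assumptions, not the paper's implementation.

```python
import math

def distance(a, b):
    # Euclidean distance between two points (tuples of coordinates).
    return math.dist(a, b)

def needs_replan(user_pos, path, tol=0.5):
    # Deviation from the path = distance to the nearest path waypoint;
    # beyond the tolerance, the planner should compute a fresh path.
    return min(distance(user_pos, p) for p in path) > tol
```

A real system would measure deviation against the interpolated path segments rather than discrete waypoints, but the waypoint version keeps the sketch minimal.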
The minimum energy expenditure shortest path method
This article discusses the addition of an energy parameter to the shortest-path execution process; namely, the energy expended by a character while executing the path. Given a simple environment in which a character can perform locomotion-related actions, such as walking and stair stepping, current techniques execute the shortest path based on the length of the extracted root trajectory. However, actual humans acting in constrained environments do not plan according to the shortest-path criterion alone; they conceptually favor the path that minimizes energy expenditure. On this basis, virtual characters should also execute their paths so as to minimize actual energy expenditure. In this article, a simple method is presented that uses a formula for computing VO2 (oxygen uptake) levels, a proxy for the energy expended by humans during various activities. The presented solution could be beneficial in any situation requiring a sophisticated perspective on the path-execution process. Moreover, it can be implemented in almost every path-planning method that can measure stepping actions or other actions of a virtual character.
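A minimal sketch of such an energy-based path cost, in the spirit of this abstract: it uses the standard ACSM walking and stepping VO2 estimates as the energy proxy, while the specific routes, speeds, and durations are made-up illustrative values, not the article's experiments.

```python
def walking_vo2(speed_m_min, grade=0.0):
    # ACSM walking equation, in ml O2 / kg / min.
    return 0.1 * speed_m_min + 1.8 * speed_m_min * grade + 3.5

def stepping_vo2(steps_per_min, step_height_m):
    # ACSM stepping equation, in ml O2 / kg / min.
    return 0.2 * steps_per_min + 1.33 * 1.8 * step_height_m * steps_per_min + 3.5

def route_energy(segments):
    # segments: list of (vo2_rate, duration_min); energy in ml O2 / kg.
    return sum(rate * dur for rate, dur in segments)

speed = 80.0  # walking speed in m/min (illustrative)
# Short route: 20 m flat walk plus 30 s of stair stepping.
short = [(walking_vo2(speed), 20 / speed),
         (stepping_vo2(60, 0.17), 0.5)]
# Long route: 60 m of flat walking, no stairs.
long_flat = [(walking_vo2(speed), 60 / speed)]
best = min((short, long_flat), key=route_energy)
```

With these numbers the longer flat route costs less energy than the shorter stair route, illustrating how the minimum-energy path can differ from the shortest one.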
Learning to Navigate Cloth using Haptics
We present a controller that allows an arm-like manipulator to navigate
deformable cloth garments in simulation through the use of haptic information.
The main challenge of such a controller is to avoid getting tangled in, tearing
or punching through the deforming cloth. Our controller aggregates force
information from a number of haptic-sensing spheres all along the manipulator
for guidance. Based on haptic forces, each individual sphere updates its target
location, and the conflicts that arise between this set of desired positions are
resolved by solving an inverse kinematics problem with constraints.
Reinforcement learning is used to train the controller for a single
haptic-sensing sphere, where a training run is terminated (and thus penalized)
when large forces are detected due to contact between the sphere and a
simplified model of the cloth. In simulation, we demonstrate successful
navigation of a robotic arm through a variety of garments, including an
isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two
baseline controllers: one without haptics and another that was trained on
large forces between the sphere and cloth, but without early termination.
Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A.
Related publications http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
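The per-sphere target update and conflict resolution described in this abstract can be sketched simply. This is not the paper's controller: each sphere nudges its target opposite the sensed contact force, and the conflicting targets are reconciled by iteratively re-projecting the chain onto fixed link lengths, a position-based stand-in for the constrained IK solve. The gain, link length, and iteration count are illustrative assumptions.

```python
import numpy as np

def update_targets(positions, forces, gain=0.05):
    # Each sphere moves its desired position away from the sensed force.
    return positions - gain * forces

def reconcile(targets, link_len, iters=50):
    # Gauss-Seidel projection: restore fixed spacing between neighbouring
    # spheres while staying close to the desired targets.
    p = targets.copy()
    for _ in range(iters):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - link_len) * d / dist
            p[i] += corr
            p[i + 1] -= corr
    return p
```

A constrained IK solver over the arm's joint angles, as in the paper, would replace `reconcile`; the projection loop merely shows how conflicting per-sphere goals get merged into one consistent configuration.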