Functional Autonomy Techniques for Manipulation in Uncertain Environments

Abstract

As robotic platforms are deployed in an ever more diverse array of environments, their ability to exercise visuomotor capabilities without supervision is complicated by the potential for unforeseen operating conditions. This challenge is particularly acute in the domain of manipulation, where effective interaction requires significant geometric, semantic, and kinetic understanding across the space of possible manipulands. To facilitate the adoption of robotic platforms in such environments, this work investigates the application of functional, or behavior-level, autonomy to manipulation in uncertain environments. Three functional autonomy techniques are presented, each addressing a subproblem within the domain.

First, the task of reactively selecting among a set of actions that incur probabilistic costs while advancing the same goal metric, in the presence of an operator's action preference, is formulated as the Obedient Multi-Armed Bandit (OMAB) problem within reinforcement learning. A policy for the problem is presented and evaluated against a novel performance metric, disappointment (analogous to the prototypical MAB's regret), in comparison with adaptations of existing MAB policies. The problem is posed for both stationary and non-stationary cost distributions, in the context of two example planetary exploration applications: multi-modal mobility and surface excavation.

Second, a computational model is developed that derives semantic meaning from the outcomes of manipulation tasks, leveraging physics simulation and clustering to learn symbolic failure modes. A deep network extracts visual signatures for each mode that may then guide failure recovery. The model is demonstrated on the archetypal manipulation task of placing objects into a container, as well as stacking cuboids, and is evaluated against both synthetic verification sets and real depth images.
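The abstract does not specify the OMAB policy itself; purely as an illustration of the problem setting, the following minimal sketch models arms with stochastic costs and an operator-preferred arm, obeying the preference unless another arm's running mean cost undercuts it by more than a tolerance margin. The class name, the tolerance heuristic, and the incremental-mean bookkeeping are all hypothetical choices, not the thesis's algorithm.

```python
class ObedientBandit:
    """Illustrative OMAB-style policy sketch (assumed, not the thesis's method).

    Arms incur stochastic costs toward a shared goal; the operator prefers
    one arm. The policy follows the preference unless another arm's running
    mean cost is lower by more than a tolerance, trading obedience for cost.
    """

    def __init__(self, n_arms, preferred, tolerance=0.1):
        self.counts = [0] * n_arms      # pulls per arm
        self.means = [0.0] * n_arms     # running mean cost per arm
        self.preferred = preferred      # operator's preferred arm index
        self.tolerance = tolerance      # allowed excess cost of obeying

    def select(self):
        # Pull each arm once before trusting the estimates.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        best = min(range(len(self.means)), key=lambda a: self.means[a])
        # Disobey only when the preferred arm's estimated cost exceeds
        # the cheapest arm's by more than the tolerance margin.
        if self.means[self.preferred] - self.means[best] > self.tolerance:
            return best
        return self.preferred

    def update(self, arm, cost):
        # Incremental mean update after observing the incurred cost.
        self.counts[arm] += 1
        self.means[arm] += (cost - self.means[arm]) / self.counts[arm]
```

A disappointment metric, under this sketch, could then be accumulated as the gap between incurred cost and the best arm's mean, penalized when the preference is overridden.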
Third, an approach is presented for visually estimating the minimum-magnitude grasping wrench necessary to extract massive objects from an unstructured pile, subject to a given end effector's grasping limits; this is formulated for each object as a "wrench-space stiction manifold". Object properties are estimated from segmented RGBD point clouds, and a geometric adjacency graph is used to infer the wrenches incident upon each object, allowing candidate extraction object/force-vector pairs likely to be within the system's capability to be selected from the pile.
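The stiction-manifold formulation is not detailed in the abstract; the sketch below is a deliberately crude stand-in that captures only the selection step. It assumes each object's stiction can be approximated as Coulomb friction from the weight of neighbors resting on it (per an adjacency graph), and keeps objects whose required lift force fits within a gripper limit. The function name, the friction model, and all parameters are hypothetical.

```python
GRAVITY = 9.81  # m/s^2

def extraction_candidates(objects, adjacency, mu, f_max):
    """Rank pile objects by an assumed extraction-force estimate.

    objects:   dict name -> mass in kg (e.g. estimated from segmented
               RGBD point clouds)
    adjacency: dict name -> list of object names resting on that object
    mu:        assumed Coulomb friction coefficient
    f_max:     end effector's maximum grasping/lifting force in N
    Returns (name, required_force) pairs within f_max, easiest first.
    """
    candidates = []
    for name, mass in objects.items():
        # Normal load from neighbors resting on this object.
        load = sum(objects[o] for o in adjacency.get(name, [])) * GRAVITY
        # Lift the object's own weight plus break the frictional stiction.
        required = mass * GRAVITY + mu * load
        if required <= f_max:
            candidates.append((name, required))
    return sorted(candidates, key=lambda c: c[1])
```

In this toy model an unburdened object needs only its own weight lifted, while a buried one also pays a friction term proportional to the load above it, which is the qualitative trade-off the pile-selection step exploits.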