276 research outputs found

    Multi-armed bandit models for 2D grasp planning with uncertainty

    Abstract — For applications such as warehouse order fulfillment, robot grasps must be robust to uncertainty arising from sensing, mechanics, and control. One way to achieve robustness is to evaluate the performance of candidate grasps by sampling perturbations in shape, pose, and gripper approach, and to compute the probability of force closure for each candidate to identify a grasp with the highest expected quality. Since evaluating the quality of each grasp is computationally demanding, prior work has turned to cloud computing. To improve computational efficiency and to extend this work, we consider how Multi-Armed Bandit (MAB) models for optimizing decisions can be applied in this context. We formulate robust grasp planning as a MAB problem and evaluate convergence times towards an optimal grasp candidate using 100 object shapes from the Brown Vision 2D Lab Dataset with 1000 grasp candidates per object. We consider the case where shape uncertainty is represented as a Gaussian process implicit surface (GPIS) with Gaussian uncertainty in pose, gripper approach angle, and coefficient of friction. We find that the Thompson Sampling and Gittins index MAB methods converged to within 3% of the optimal grasp up to 10x faster than uniform allocation and 5x faster than iterative pruning.
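The bandit formulation described above can be sketched with Thompson Sampling over Bernoulli force-closure rewards. This is a minimal illustration, not the paper's implementation: `evaluate` is a hypothetical stand-in for drawing one shape/pose/friction perturbation and checking force closure for a candidate grasp.

```python
import random

def thompson_sampling(grasps, evaluate, budget=1000):
    """Allocate grasp-quality evaluations with Thompson Sampling.

    `grasps` is a list of candidate grasps; `evaluate(g)` is assumed to
    return 1 if a sampled perturbation yields force closure, else 0
    (a Bernoulli reward). Returns the index of the best candidate.
    """
    # Beta(1, 1) prior on each candidate's probability of force closure
    successes = [1] * len(grasps)
    failures = [1] * len(grasps)
    for _ in range(budget):
        # Sample a plausible quality for each arm, evaluate the argmax
        draws = [random.betavariate(s, f) for s, f in zip(successes, failures)]
        i = max(range(len(grasps)), key=draws.__getitem__)
        if evaluate(grasps[i]):
            successes[i] += 1
        else:
            failures[i] += 1
    # Report the candidate with the highest posterior mean quality
    return max(range(len(grasps)),
               key=lambda i: successes[i] / (successes[i] + failures[i]))
```

The sampling step concentrates evaluations on promising candidates, which is the source of the speedup over uniform allocation reported in the abstract.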

    Uncertainty-driven Affordance Discovery for Efficient Robotics Manipulation

    Robotics affordances, providing information about what actions can be taken in a given situation, can aid robotics manipulation. However, learning about affordances requires expensive, large annotated datasets of interactions or demonstrations. In this work, we show that active learning can mitigate this problem and propose the use of uncertainty to drive an interactive affordance discovery process. We show that our method enables the efficient discovery of visual affordances for several action primitives, such as grasping, stacking objects, or opening drawers, strongly improving data efficiency and allowing us to learn grasping affordances on a real-world setup with an xArm 6 robot arm in a small number of trials. Comment: Presented at the GMPL workshop @ RSS 202
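An uncertainty-driven acquisition rule of the kind described can be sketched as querying the candidate interaction whose predicted success probability is most uncertain (maximum binary entropy). Both the rule and `predict_prob` are illustrative assumptions, not the paper's method.

```python
import math

def pick_most_uncertain(candidates, predict_prob):
    """Select the candidate interaction whose predicted affordance is
    most uncertain under the current model.

    `predict_prob(c)` is assumed to return the model's probability that
    action candidate `c` succeeds (e.g. that a grasp at that pixel works).
    """
    def binary_entropy(p):
        eps = 1e-12  # guard against log(0)
        return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
    # Highest-entropy prediction = the interaction the model knows least about
    return max(candidates, key=lambda c: binary_entropy(predict_prob(c)))
```

Executing the selected interaction and adding its outcome to the training set is what lets such a loop discover affordances in few trials.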

    Adaptive modality selection algorithm in robot-assisted cognitive training

    Interaction of socially assistive robots with users is based on social cues coming from different interaction modalities, such as speech or gestures. However, using all modalities at all times may be inefficient, as it can overload the user with redundant information and increase the task completion time. Additionally, users may favor certain modalities over others as a result of their disability or personal preference. In this paper, we propose an Adaptive Modality Selection (AMS) algorithm that chooses modalities depending on the state of the user and the environment, as well as user preferences. The variables that describe the environment and the user state are defined as resources, and we posit that modalities are successful if certain resources possess specific values during their use. Besides the resources, the proposed algorithm takes into account user preferences, which it learns while interacting with users. We tested our algorithm in simulations, and we implemented it on a robotic system that provides cognitive training, specifically sequential memory exercises. Experimental results show that it is possible to use only a subset of available modalities without compromising the interaction. Moreover, we see a trend for users to perform better when interacting with a system with the AMS algorithm implemented.
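The resource-based selection idea can be sketched as follows. This is a hypothetical simplification: the exact matching rule, resource representation, and preference-learning scheme of AMS are not specified here, so names like `modalities` and `preferences` are illustrative.

```python
def select_modalities(modalities, resources, preferences):
    """Choose a preference-ranked subset of usable modalities.

    `modalities` maps a modality name to the resource values it requires,
    `resources` holds the current environment/user state, and
    `preferences` maps modality -> learned preference weight.
    """
    # A modality is usable only if every resource it needs has the
    # required value right now (e.g. speech needs low ambient noise)
    usable = [m for m, reqs in modalities.items()
              if all(resources.get(k) == v for k, v in reqs.items())]
    # Rank by learned user preference; callers can take a prefix of this
    # list to avoid overloading the user with redundant channels
    return sorted(usable, key=lambda m: preferences.get(m, 0.0), reverse=True)
```

For example, with `modalities = {"speech": {"noise": "low"}, "gesture": {"user_visible": True}}`, a noisy room would rule out speech regardless of preference.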

    Functional Autonomy Techniques for Manipulation in Uncertain Environments

    As robotic platforms are put to work in an ever more diverse array of environments, their ability to deploy visuomotor capabilities without supervision is complicated by the potential for unforeseen operating conditions. This is a particular challenge within the domain of manipulation, where significant geometric, semantic, and kinetic understanding across the space of possible manipulands is necessary to allow effective interaction. To facilitate adoption of robotic platforms in such environments, this work investigates the application of functional, or behavior level, autonomy to the task of manipulation in uncertain environments. Three functional autonomy techniques are presented to address subproblems within the domain. The task of reactive selection between a set of actions that incur a probabilistic cost to advance the same goal metric in the presence of an operator action preference is formulated as the Obedient Multi-Armed Bandit (OMAB) problem, under the purview of Reinforcement Learning. A policy for the problem is presented and evaluated against a novel performance metric, disappointment (analogous to prototypical MAB's regret), in comparison to adaptations of existing MAB policies. This is posed for both stationary and non-stationary cost distributions, within the context of two example planetary exploration applications of multi-modal mobility, and surface excavation. Second, a computational model that derives semantic meaning from the outcome of manipulation tasks is developed, which leverages physics simulation and clustering to learn symbolic failure modes. A deep network extracts visual signatures for each mode that may then guide failure recovery. The model is demonstrated through application to the archetypal manipulation task of placing objects into a container, as well as stacking of cuboids, and evaluated against both synthetic verification sets and real depth images. 
Third, an approach is presented for visual estimation of the minimum-magnitude grasping wrench necessary to extract massive objects from an unstructured pile, subject to a given end effector's grasping limits, which is formulated for each object as a "wrench space stiction manifold". Properties are estimated from segmented RGBD point clouds, and a geometric adjacency graph is used to infer incident wrenches upon each object, allowing candidate extraction object/force-vector pairs to be selected from the pile that are likely to be within the system's capability.
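The obedient-bandit setting from the first technique can be sketched with a toy policy: minimize expected cost while biasing toward the operator's preferred action. Both the epsilon-greedy policy and the `bias` discount are illustrative assumptions, not the OMAB policy or disappointment metric proposed in the work.

```python
import random

def obedient_epsilon_greedy(costs, preferred, rounds=500, epsilon=0.1, bias=0.5):
    """Toy sketch of an obedient bandit: pick low-cost actions, but
    discount the operator-preferred arm's estimated cost by `bias`.

    `costs[i]()` samples the stochastic cost of action i.
    Returns the pull counts per action.
    """
    n = len(costs)
    totals, counts = [0.0] * n, [0] * n
    for t in range(rounds):
        if t < n:
            i = t                      # initialise: pull each arm once
        elif random.random() < epsilon:
            i = random.randrange(n)    # explore
        else:
            # exploit: lowest empirical mean cost, with the operator's
            # preferred action made artificially cheaper
            means = [totals[k] / counts[k] - (bias if k == preferred else 0.0)
                     for k in range(n)]
            i = min(range(n), key=means.__getitem__)
        totals[i] += costs[i]()
        counts[i] += 1
    return counts
```

The effect is that a mildly more expensive but operator-preferred mobility mode is still chosen, mirroring the trade-off between cost and obedience the abstract describes.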