Learning to Singulate Objects using a Push Proposal Network
Learning to act in unstructured environments, such as cluttered piles of
objects, poses a substantial challenge for manipulation robots. We present a
novel neural network-based approach that separates unknown objects in clutter
by selecting favourable push actions. Our network is trained from data
collected through autonomous interaction of a PR2 robot with randomly organized
tabletop scenes. The model is designed to propose meaningful push actions based
on over-segmented RGB-D images. We evaluate our approach by singulating up to 8
unknown objects in clutter. We demonstrate that our method enables the robot to
perform the task with a high success rate and a low number of required push
actions. Our results based on real-world experiments show that our network is
able to generalize to novel objects of various sizes and shapes, as well as to
arbitrary object configurations. Videos of our experiments can be viewed at
http://robotpush.cs.uni-freiburg.de.
Comment: International Symposium on Robotics Research (ISRR) 2017, videos:
http://robotpush.cs.uni-freiburg.de
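The singulation-by-pushing idea above can be sketched with a simple geometric heuristic standing in for the learned push-proposal network (the function, its parameters, and the escape-direction rule are illustrative assumptions, not the paper's model):

```python
import math

def propose_push(centroids, target_idx, push_len=0.1):
    """Heuristic stand-in for a learned push proposal (hypothetical):
    push the target object away from the mean of the other objects'
    centroids, which tends to singulate it from the clutter."""
    tx, ty = centroids[target_idx]
    others = [c for i, c in enumerate(centroids) if i != target_idx]
    mx = sum(x for x, _ in others) / len(others)
    my = sum(y for _, y in others) / len(others)
    dx, dy = tx - mx, ty - my
    norm = math.hypot(dx, dy) or 1.0
    # Push start sits just behind the target along the escape direction;
    # the end point extends push_len metres away from the cluster.
    start = (tx - 0.02 * dx / norm, ty - 0.02 * dy / norm)
    end = (tx + push_len * dx / norm, ty + push_len * dy / norm)
    return start, end

# Example: three objects in a row; pushing the leftmost moves it further left.
centroids = [(0.0, 0.0), (0.05, 0.0), (0.10, 0.0)]
start, end = propose_push(centroids, target_idx=0)
print(end[0] < start[0])  # → True: the push moves the target away from the cluster
```

A learned network replaces this hand-written direction rule with scores predicted from over-segmented RGB-D input, but the push parameterization (start point, direction, length) is the same kind of output.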
Interactive Perception Based on Gaussian Process Classification for House-Hold Objects Recognition and Sorting
We present an interactive perception model for
object sorting based on Gaussian Process (GP) classification
that is capable of recognizing object categories from point
cloud data. In our approach, FPFH features are extracted from
point clouds to describe the local 3D shape of objects, and
a Bag-of-Words coding method is used to obtain an object-level
vocabulary representation. Multi-class Gaussian Process
classification is employed to provide a probabilistic estimate of
the identity of the object and plays a key role in the interactive
perception cycle – modelling perception confidence. We show
results from simulated input data on both SVM- and GP-based
multi-class classifiers to validate the recognition accuracy of our
proposed perception model. Our results demonstrate that by
using a GP-based classifier, we obtain true positive classification
rates of up to 80%. Our semi-autonomous object sorting
experiments show that the proposed GP-based interactive
sorting approach outperforms random sorting by up to 30%
when applied to scenes comprising configurations of household
objects.
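The Bag-of-Words step in the pipeline above can be sketched as follows; the vocabulary, the descriptor dimensionality, and the nearest-word assignment rule are illustrative assumptions (real FPFH descriptors are 33-dimensional and the codebook would be learned by clustering):

```python
def bow_encode(features, vocabulary):
    """Bag-of-Words encoding of local shape descriptors (e.g. FPFH
    vectors): each feature votes for its nearest vocabulary word, and
    the normalized histogram of votes is the object-level representation
    fed to the classifier."""
    hist = [0] * len(vocabulary)
    for f in features:
        # Assign the feature to the nearest word by squared Euclidean distance.
        d2 = [sum((a - b) ** 2 for a, b in zip(f, w)) for w in vocabulary]
        hist[d2.index(min(d2))] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# Toy example with a 2-word vocabulary of 2-D descriptors.
vocab = [[0.0, 0.0], [1.0, 1.0]]
feats = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]]
hist = bow_encode(feats, vocab)  # one third of the features hit word 0
print(hist)
```

The resulting fixed-length histogram is what makes variable-size point clouds comparable, so a multi-class GP (or SVM) classifier can operate on it directly.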
Parallel Monte Carlo Tree Search with Batched Rigid-body Simulations for Speeding up Long-Horizon Episodic Robot Planning
We propose a novel Parallel Monte Carlo tree search with Batched Simulations
(PMBS) algorithm for accelerating long-horizon, episodic robotic planning
tasks. Monte Carlo tree search (MCTS) is an effective heuristic search
algorithm for solving episodic decision-making problems whose underlying search
spaces are expansive. Leveraging a GPU-based large-scale simulator, PMBS
introduces massive parallelism into MCTS for solving planning tasks through the
batched execution of a large number of concurrent simulations, which allows for
more efficient and accurate evaluations of the expected cost-to-go over large
action spaces. When applied to the challenging manipulation task of object
retrieval from clutter, PMBS achieves a substantial speedup with improved
solution quality in comparison to a serial MCTS implementation. We
show that PMBS can be directly applied to real robot hardware with negligible
sim-to-real differences. Supplementary material, including video, can be found
at https://github.com/arc-l/pmbs.
Comment: Accepted for IROS 2022.
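The batched-simulation idea behind PMBS can be illustrated with a toy rollout evaluator: instead of simulating one state at a time, all (state, rollout) pairs are stepped together, the shape a GPU simulator would exploit. The dynamics, cost model, and random rollout policy here are hypothetical placeholders, not the paper's simulator:

```python
import random

def batched_rollouts(states, step_fn, horizon, n_rollouts, rng):
    """Estimate the expected cost-to-go of each state by running
    n_rollouts random rollouts per state, stepping the whole batch
    together at every horizon step (stand-in for batched GPU simulation)."""
    batch = [s for s in states for _ in range(n_rollouts)]
    costs = [0.0] * len(batch)
    for _ in range(horizon):
        actions = [rng.uniform(-1, 1) for _ in batch]       # random rollout policy
        batch = [step_fn(s, a) for s, a in zip(batch, actions)]
        costs = [c + abs(s) for c, s in zip(costs, batch)]  # accumulate stage cost
    # Average the rollout costs back per original state.
    k = n_rollouts
    return [sum(costs[i * k:(i + 1) * k]) / k for i in range(len(states))]

rng = random.Random(0)
step = lambda s, a: s + 0.1 * a            # trivial 1-D dynamics
est = batched_rollouts([0.0, 5.0], step, horizon=10, n_rollouts=32, rng=rng)
print(est[0] < est[1])  # → True: the state nearer the goal (origin) costs less
```

In MCTS these batched estimates back up through the tree; the point of the batching is that the inner loop over rollouts becomes one wide simulator call rather than many serial ones.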
Mechanical Search: Multi-Step Retrieval of a Target Object Occluded by Clutter
When operating in unstructured environments such as warehouses, homes, and
retail centers, robots are frequently required to interactively search for and
retrieve specific objects from cluttered bins, shelves, or tables. Mechanical
Search describes the class of tasks where the goal is to locate and extract a
known target object. In this paper, we formalize Mechanical Search and study a
version where distractor objects are heaped over the target object in a bin.
The robot uses an RGBD perception system and control policies to iteratively
select, parameterize, and perform one of 3 actions -- push, suction, grasp --
until the target object is extracted, a time limit is exceeded, or no
high-confidence push or grasp is available. We present a study of 5 algorithmic
policies for mechanical search, with 15,000 simulated trials and 300 physical
trials for heaps ranging from 10 to 20 objects. Results suggest that success
can be achieved in this long-horizon task with algorithmic policies in over 95%
of instances and that the number of actions required scales approximately
linearly with the size of the heap. Code and supplementary material can be
found at http://ai.stanford.edu/mech-search.
Comment: To appear in IEEE International Conference on Robotics and Automation
(ICRA), 2019. 9 pages with 4 figures.
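The iterative select-parameterize-execute loop described in the abstract, with its three termination conditions, can be sketched as a skeleton; the `select_action` and `execute` callables are caller-supplied stubs standing in for the paper's perception system and policies:

```python
def mechanical_search(select_action, execute, max_steps):
    """Mechanical Search control loop: repeatedly pick one of
    {push, suction, grasp} until the target is extracted, the step
    budget is exhausted, or no confident action remains."""
    for step in range(max_steps):
        action = select_action()        # returns None if no confident action
        if action is None:
            return "no_confident_action", step
        if execute(action) == "target_extracted":
            return "success", step + 1
    return "timeout", max_steps

# Toy run: the target is uncovered by two actions and extracted by the third.
script = iter(["push", "suction", "grasp"])
outcome = mechanical_search(
    select_action=lambda: next(script, None),
    execute=lambda a: "target_extracted" if a == "grasp" else "moved",
    max_steps=10,
)
print(outcome)  # → ('success', 3)
```

The abstract's finding that action count scales roughly linearly with heap size is a statement about how many iterations of this loop a good policy needs.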
Domain-Independent Disperse and Pick method for Robotic Grasping
Picking unseen objects from clutter is a difficult problem because of the
variability in objects (shape, size, and material) and occlusion due to
clutter. As a result, it becomes difficult for grasping methods to segment the
objects properly and they fail to singulate the object to be picked. This may
result in grasp failure or picking of multiple objects together in a single
attempt. A push-to-move action by the robot will be beneficial to disperse the
objects in the workspace and thus assist the grasping and vision algorithm. We
propose a disperse and pick method for domain-independent robotic grasping in a
highly cluttered heap of objects. The novel contribution of our framework is
the introduction of a heuristic clutter removal method that does not require
deep learning and can work on unseen objects. At each iteration of the
algorithm, the robot either performs a push-to-move action or a grasp action
based on the estimated clutter profile. For grasp planning, we present an
improved and adaptive version of a recent domain-independent grasping method.
The efficacy of the integrated system is demonstrated in simulation as well as
in the real world.
Comment: Published at the 2022 International Joint Conference on Neural Networks
(IJCNN).
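The push-or-grasp decision driven by an "estimated clutter profile" can be sketched as a minimal rule; the clutter metric (mean pairwise centroid distance) and the threshold are illustrative assumptions, not the paper's heuristic:

```python
import itertools

def disperse_and_pick(objects, clutter_threshold=0.04):
    """Decide between a push-to-move action and a grasp attempt from a
    simple clutter profile: if object centroids are, on average, closer
    than the threshold, disperse the heap first; otherwise grasp."""
    pairs = list(itertools.combinations(objects, 2))
    if not pairs:
        return "grasp"                 # a lone object needs no dispersal
    mean_gap = sum(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        for (ax, ay), (bx, by) in pairs
    ) / len(pairs)
    return "push" if mean_gap < clutter_threshold else "grasp"

print(disperse_and_pick([(0.0, 0.0), (0.01, 0.0)]))  # → push (dense heap)
print(disperse_and_pick([(0.0, 0.0), (0.2, 0.0)]))   # → grasp (dispersed)
```

Iterating this rule until the workspace is empty mirrors the paper's alternation between dispersal and picking, with the advantage that the clutter test itself needs no learned model.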