Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
Camera viewpoint selection is an important aspect of visual grasp detection,
especially in clutter where many occlusions are present. Where other approaches
use a static camera position or fixed data collection routines, our Multi-View
Picking (MVP) controller uses an active perception approach to choose
informative viewpoints based directly on a distribution of grasp pose estimates
in real time, reducing uncertainty in the grasp poses caused by clutter and
occlusions. In trials of grasping 20 objects from clutter, our MVP controller
achieves 80% grasp success, outperforming a single-viewpoint grasp detector by
12%. We also show that our approach is both more accurate and more efficient
than approaches that consider multiple fixed viewpoints.
Comment: ICRA 2019. Video: https://youtu.be/Vn3vSPKlaEk Code: https://github.com/dougsm/mvp_gras
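The active-perception idea described above (pick the next camera viewpoint expected to be most informative about the distribution of grasp pose estimates) can be sketched roughly as follows. This is a hypothetical illustration, not the MVP controller itself; `predict_quality_map`, the candidate-view list, and the entropy criterion are assumptions standing in for a learned grasp detector evaluated from each view.

```python
import numpy as np

def grasp_entropy(quality_map, eps=1e-9):
    """Shannon entropy of a normalized grasp-quality distribution."""
    p = quality_map / (quality_map.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

def next_best_view(candidate_views, predict_quality_map):
    """Pick the viewpoint whose predicted grasp-quality map has the
    lowest entropy, i.e. the most concentrated (informative) grasp
    pose distribution. `predict_quality_map` is a stand-in for a
    grasp detector queried from a candidate viewpoint."""
    scored = [(grasp_entropy(predict_quality_map(v)), v) for v in candidate_views]
    return min(scored, key=lambda t: t[0])[1]
```

A lower-entropy grasp-quality map means the detector is more certain where to grasp, which is the sense in which a view is "informative" here.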
Active SLAM for autonomous underwater exploration
Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that addresses exploration by combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that exploits the joint reduction of map and state entropy. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle's uncertainty bounded, thus creating more consistent maps.
Peer Reviewed. Postprint (published version).
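Action selection by joint map-and-state entropy reduction can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the paper's formulation: `predict_map` and `predict_pose_cov` stand in for the framework's prediction step, map entropy is computed cell-wise over a binary occupancy grid, and pose entropy uses the Gaussian log-determinant formula.

```python
import numpy as np

def occupancy_entropy(grid, eps=1e-9):
    """Cell-wise binary entropy of an occupancy grid (probabilities in [0, 1])."""
    p = np.clip(grid, eps, 1 - eps)
    return float(-np.sum(p * np.log(p) + (1 - p) * np.log(1 - p)))

def select_action(actions, predict_map, predict_pose_cov, w_map=1.0, w_pose=1.0):
    """Choose the candidate action minimizing a weighted joint entropy:
    map entropy from the predicted occupancy grid plus the differential
    entropy of the predicted Gaussian pose, 0.5 * log det(2*pi*e*Cov)."""
    def joint_entropy(a):
        h_map = occupancy_entropy(predict_map(a))
        cov = predict_pose_cov(a)
        h_pose = 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))
        return w_map * h_map + w_pose * h_pose
    return min(actions, key=joint_entropy)
```

An action that revisits mapped terrain can win this trade-off when loop closing shrinks the pose covariance more than new coverage would shrink map entropy, which is the behavior that keeps the vehicle's uncertainty bounded.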
Viewpoint Push Planning for Mapping of Unknown Confined Spaces
Viewpoint planning is an important task in any application where objects or
scenes need to be viewed from different angles to achieve sufficient coverage.
The mapping of confined spaces such as shelves is an especially challenging
task since objects occlude each other and the scene can only be observed from
the front, posing limitations on the possible viewpoints. In this paper, we
propose a deep reinforcement learning framework that generates promising views
aiming at reducing the map entropy. Additionally, the pipeline extends standard
viewpoint planning by predicting adequate minimally invasive push actions to
uncover occluded objects and increase the visible space. Using a 2.5D occupancy
height map as state representation that can be efficiently updated, our system
decides whether to plan a new viewpoint or perform a push. To learn feasible
pushes, we use a neural network to sample push candidates on the map based on
training data provided by human experts. As simulated and real-world
experimental results with a robotic arm show, our system significantly increases the mapped space compared to several baselines, while the executed push actions substantially benefit the viewpoint planner with only minor changes to the object configuration.
Comment: In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 202
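The decide-view-or-push loop can be sketched roughly as below, with a hand-written heuristic standing in for the learned deep-RL policy. All names, the sentinel value for unobserved cells, and the thresholds are illustrative assumptions; the state is a 2.5D height map in which unknown cells carry the sentinel.

```python
import numpy as np

def unknown_fraction(height_map, unknown=-1.0):
    """Fraction of unobserved cells in a 2.5D height map,
    used here as a cheap proxy for map entropy."""
    return float(np.mean(height_map == unknown))

def choose_action(height_map, view_gains, gain_threshold=0.05):
    """Heuristic stand-in for the learned policy: take the best
    viewpoint while some view still promises information gain, fall
    back to a push to uncover occluded objects once all views have
    diminishing returns, and stop when the map is (almost) complete."""
    if unknown_fraction(height_map) < 0.01:
        return ("done", None)
    best = int(np.argmax(view_gains))
    if view_gains[best] >= gain_threshold:
        return ("view", best)
    return ("push", None)
```

A learned policy replaces the fixed threshold with a value estimated from the height-map state, but the structure of the decision (view, push, or terminate) is the same.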