6D Pose Estimation using an Improved Method based on Point Pair Features
The Point Pair Feature (Drost et al. 2010) has been one of the most
successful 6D pose estimation methods among model-based approaches, offering
an efficient, integrated compromise between the traditional local and
global pipelines. In recent years, several variations of the algorithm
have been proposed. Among these extensions, the solution introduced by
Hinterstoisser et al. (2016) is a major contribution. This work presents a
variation of this PPF method applied to the SIXD Challenge datasets presented
at the 3rd International Workshop on Recovering 6D Object Pose held at the ICCV
2017. We report an average recall of 0.77 across all datasets, and recalls
of 0.82, 0.67, 0.85, 0.37, 0.97 and 0.96 for the hinterstoisser, tless,
tudlight, rutgers, tejani and doumanoglou datasets, respectively.
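The four-dimensional point pair feature of Drost et al. (2010) that this line of work builds on can be sketched in a few lines. This is a minimal illustration of the feature itself, not the authors' pipeline; for two oriented points (p1, n1) and (p2, n2) with d = p2 - p1, the feature is F = (||d||, ∠(n1, d), ∠(n2, d), ∠(n1, n2)).

```python
import numpy as np

def angle(a, b):
    """Angle in radians between two vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """Four-dimensional PPF of Drost et al. (2010):
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    with d = p2 - p1."""
    d = p2 - p1
    return np.array([np.linalg.norm(d),
                     angle(n1, d),
                     angle(n2, d),
                     angle(n1, n2)])
```

In the full method these features are quantized and hashed over all point pairs of the model, so that scene pairs can vote for object poses in a Hough-like scheme.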
Using PCL Global Descriptors in a DenseFusion Architecture
In this paper, we present an alternative architecture to the state-of-the-art
in 6D pose estimation, DenseFusion. We changed the architecture of the method in
the depth feature extraction phase. Instead of using the PointNet, as used
in the original DenseFusion, we used global descriptors from the Point
Cloud Library (PCL) to extract features. We made a comparison in terms
of average accuracy between the Ensemble of Shape Functions (ESF), the
Viewpoint Feature Histogram (VFH) and the original PointNet.
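The global descriptors compared here are histogram-based shape signatures. As a rough illustration of the idea behind ESF (not PCL's actual implementation, which combines distance, angle, and area statistics over sampled point triples), a minimal sketch could histogram distances between randomly sampled point pairs of a cloud:

```python
import numpy as np

def distance_histogram_descriptor(points, n_pairs=2000, bins=32, rng=None):
    """Simplified global shape descriptor in the spirit of ESF:
    a normalized histogram of distances between randomly sampled
    point pairs. (The real ESF also accumulates angle and area
    statistics; this is an illustrative reduction.)"""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()
```

Such a fixed-length vector can stand in for the learned PointNet embedding in a fusion architecture, which is the substitution the paper evaluates.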
Semantic Robot Programming for Goal-Directed Manipulation in Cluttered Scenes
We present the Semantic Robot Programming (SRP) paradigm as a convergence of
robot programming by demonstration and semantic mapping. In SRP, a user can
directly program a robot manipulator by demonstrating a snapshot of their
intended goal scene in the workspace. The robot then parses this goal as a scene
graph comprised of object poses and inter-object relations, assuming known
object geometries. Task and motion planning is then used to realize the user's
goal from an arbitrary initial scene configuration. Even when faced with
different initial scene configurations, SRP enables the robot to seamlessly
adapt to reach the user's demonstrated goal. For scene perception, we propose
the Discriminatively-Informed Generative Estimation of Scenes and Transforms
(DIGEST) method to infer the initial and goal states of the world from RGBD
images. The efficacy of SRP with DIGEST perception is demonstrated for the task
of tray-setting with a Michigan Progress Fetch robot. Scene perception and task
execution are evaluated with a public household occlusion dataset and our
cluttered scene dataset.
Comment: published in ICRA 201
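The goal representation described above, a scene graph of object poses and inter-object relations, can be sketched as a small data structure. This is a hypothetical illustration of the concept, not the SRP/DIGEST implementation; the pose convention (position plus quaternion) and the relation vocabulary are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    # Assumed pose convention: (x, y, z, qx, qy, qz, qw).
    pose: tuple

@dataclass
class SceneGraph:
    """Goal scene as a graph: nodes are posed objects, edges are
    (subject, predicate, object) relations such as ("mug", "on", "tray")."""
    objects: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add_object(self, obj: SceneObject) -> None:
        self.objects[obj.name] = obj

    def add_relation(self, subj: str, pred: str, obj: str) -> None:
        self.relations.append((subj, pred, obj))
```

A task-and-motion planner can then treat the demonstrated graph as the goal state and search for a sequence of manipulations that makes an arbitrary initial scene satisfy its relations.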
- …