Multi-View Object Instance Recognition in an Industrial Context
We present a fast object recognition system that codes shape by viewpoint-invariant geometric relations and appearance information. In our advanced industrial work-cell, the system observes the robot's workspace through three pairs of Kinect and stereo cameras, allowing for reliable and complete object information. From these sensors, we derive global viewpoint-invariant shape features and robust color features using color normalization techniques.
We show that in such a set-up, our system achieves high performance with a very low number of training samples, which is crucial for user acceptance, and that the use of multiple views is crucial for performance. This indicates that our approach can be used in controlled but realistic industrial contexts that require, besides high reliability, fast processing and intuitive, easy use at the end-user side.
Funding: European Union; Danish Council for Strategic Research
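The "robust color features making use of color normalization" mentioned above can be illustrated with a minimal sketch. One common normalization is conversion to chromaticity coordinates, which divides out per-pixel intensity and so removes much of the dependence on illumination strength; the paper's exact technique may differ, and the function name here is an assumption for illustration.

```python
import numpy as np

def chromaticity_normalize(rgb):
    """Convert an HxWx3 RGB image to chromaticity coordinates.

    Dividing each channel by the per-pixel intensity (R+G+B) removes
    dependence on overall illumination strength, one simple form of
    color normalization (illustrative; not necessarily the paper's).
    """
    rgb = rgb.astype(np.float64)
    intensity = rgb.sum(axis=-1, keepdims=True)
    intensity[intensity == 0] = 1.0  # avoid division by zero on black pixels
    return rgb / intensity
```

After this transform, a pixel and the same pixel under a twice-as-bright light map to identical chromaticity values, which is what makes color features built on top of it more robust.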
Ki-Pode: Keypoint-based Implicit Pose Distribution Estimation of Rigid Objects
The estimation of 6D poses of rigid objects is a fundamental problem in computer vision. Traditionally, pose estimation is concerned with determining a single best estimate. However, a single estimate cannot express visual ambiguity, which in many cases is unavoidable due to object symmetries or occlusion of identifying features. Failing to account for pose ambiguity can lead to failures in subsequent methods, which is unacceptable when the cost of failure is high. Estimates of full pose distributions, unlike single estimates, are well suited to expressing uncertainty about pose. Motivated by this, we propose a novel pose distribution estimation method. An implicit formulation of the probability distribution over object pose is derived from an intermediate representation of the object as a set of keypoints. This makes the pose distribution estimates highly interpretable. Furthermore, our method is based on conservative approximations, which leads to reliable estimates. The method has been evaluated on rotation distribution estimation on the YCB-V and T-LESS datasets and performs reliably on all objects.
Comment: 11 pages, 2 figures
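The idea of an implicit distribution over pose derived from keypoints can be sketched as follows: each candidate rotation is scored by how well the rotated model keypoints agree with observed keypoint estimates, and the scores are normalized over the candidate set. The Gaussian error model, the function names, and the discrete candidate set below are illustrative assumptions standing in for the paper's learned keypoint likelihoods.

```python
import numpy as np

def rotation_log_scores(rotations, keypoints_3d, observed_3d, sigma=0.05):
    """Unnormalized log-score of each candidate rotation.

    A hypothesis R is scored by the squared disagreement between the
    rotated model keypoints and the observed keypoints under an
    isotropic Gaussian error model (an illustrative assumption).
    """
    scores = []
    for R in rotations:
        err = observed_3d - keypoints_3d @ R.T  # per-keypoint residuals
        scores.append(-np.sum(err ** 2) / (2 * sigma ** 2))
    return np.array(scores)

def rotation_distribution(rotations, keypoints_3d, observed_3d):
    """Normalize log-scores over the candidate set (a softmax)."""
    log_p = rotation_log_scores(rotations, keypoints_3d, observed_3d)
    log_p -= log_p.max()  # for numerical stability
    p = np.exp(log_p)
    return p / p.sum()
```

Because every candidate rotation keeps a probability rather than being discarded, a symmetric object yields near-equal mass on its symmetry-equivalent rotations instead of an arbitrary single answer.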
Local shape feature fusion for improved matching, pose estimation and 3D object recognition
We provide new insights into the problem of shape feature description and matching, techniques that are often applied within 3D object recognition pipelines. We subject several state-of-the-art features to systematic evaluations based on multiple datasets from different sources in a uniform manner. We have carefully prepared and performed a neutral test on the datasets for which the descriptors have shown good recognition performance. Our results expose an important fallacy of previous results: the performance of a recognition system does not correlate well with the performance of the descriptor it employs. In addition, we evaluate several aspects of the matching task, including the efficiency of the different features and the potential of dimension reduction. To achieve better generalization, we introduce a method for fusing several feature matches with limited processing overhead. Our fused feature matches provide a significant increase in matching accuracy, consistent across all tested datasets. Finally, we benchmark all features in a 3D object recognition setting, providing further evidence of the advantage of fused features in terms of both accuracy and efficiency.
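The fusion of several feature matches can be sketched minimally: given one scene-to-model distance matrix per descriptor type, normalize each matrix and sum them before taking nearest neighbors, so that a descriptor that is ambiguous on its own can be disambiguated by another. The min-max normalization and function name below are illustrative assumptions; the paper's actual fusion rule may differ.

```python
import numpy as np

def fuse_matches(distance_matrices):
    """Fuse per-descriptor distance matrices into one set of matches.

    Each matrix holds pairwise distances between scene features (rows)
    and model features (columns) for one descriptor type. Min-max
    normalizing each matrix before summing keeps any single descriptor
    from dominating the fused score (an illustrative scheme).
    """
    fused = np.zeros_like(distance_matrices[0], dtype=np.float64)
    for D in distance_matrices:
        D = D.astype(np.float64)
        span = D.max() - D.min()
        if span > 0:  # a constant matrix carries no match information
            fused += (D - D.min()) / span
    return fused.argmin(axis=1)  # best model feature per scene feature
```

For example, if one descriptor assigns identical distances to two candidate matches while a second descriptor clearly separates them, the fused distances inherit the separation and recover the correct correspondences.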
A Flexible and Robust Vision Trap for Automated Part Feeder Design
Fast, robust, and flexible part feeding is essential for enabling automation of low-volume, high-variance assembly tasks. An actuated vision-based solution on a traditional vibratory feeder, referred to here as a vision trap, should in principle be able to meet these demands for a wide range of parts. In practice, however, the flexibility of such a trap is limited, as an expert is needed both to identify manageable tasks and to configure the vision system. We propose a novel approach to vision trap design in which the identification of manageable tasks is automatic and the configuration of these tasks can be delegated to an automated feeder design system. We show that the trap's capabilities can be formalized in such a way that it integrates seamlessly into the ecosystem of automated feeder design. Our results on six canonical parts show great promise for autonomous configuration of feeder systems.
Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)