    Supervised and unsupervised learning in vision-guided robotic bin picking applications for mixed-model assembly

    Mixed-model assembly usually involves numerous component variants that require an effective materials supply. Picking activities are often performed manually, but robotic bin picking has the potential to improve quality while reducing man-hour consumption. Robots can use vision systems to learn how to perform their tasks. This paper aims to understand the differences between two learning approaches: supervised and unsupervised learning. An experiment measuring engineering preparation time (EPT) and recognition quality (RQ) is performed. The findings show improved RQ but longer EPT with the supervised approach compared to the unsupervised one.
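    The supervised/unsupervised trade-off the abstract describes can be illustrated with a toy sketch: a supervised nearest-centroid classifier (which needs labeled examples, i.e. extra engineering preparation) versus plain k-means clustering (which needs no labels). The data, feature dimensions, and names below are synthetic and illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two component variants, each a cluster of 2-D feature vectors
    # (e.g. shape descriptors produced by a vision system).
    variant_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))
    variant_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(20, 2))
    features = np.vstack([variant_a, variant_b])
    labels = np.array([0] * 20 + [1] * 20)  # labels cost engineering time (EPT)

    def supervised_centroids(x, y):
        """Nearest-centroid classifier: centroids come from labeled data."""
        return np.array([x[y == k].mean(axis=0) for k in np.unique(y)])

    def unsupervised_centroids(x, iters=10):
        """Minimal 2-means: centroids found without any labels."""
        centroids = np.stack([x[0], x[-1]])  # deterministic init for the sketch
        for _ in range(iters):
            assign = np.argmin(np.linalg.norm(x[:, None] - centroids, axis=2), axis=1)
            centroids = np.array([x[assign == k].mean(axis=0) for k in range(2)])
        return centroids

    def classify(x, centroids):
        return np.argmin(np.linalg.norm(x[:, None] - centroids, axis=2), axis=1)

    sup = classify(features, supervised_centroids(features, labels))
    unsup = classify(features, unsupervised_centroids(features))

    # A stand-in for recognition quality (RQ): fraction of samples assigned
    # to their true variant (unsupervised case scored up to label permutation).
    rq_sup = (sup == labels).mean()
    rq_unsup = max((unsup == labels).mean(), ((1 - unsup) == labels).mean())
    ```

    On such well-separated toy clusters both approaches recognize the variants; the paper's point is that on real bin-picking imagery the supervised approach buys higher RQ at the cost of longer EPT.
    
    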

    Vision-Based Robotic Grasping of Reels for Automatic Packaging Machines

    In this work, we present a vision system particularly suited to the automatic recognition of reels in the field of automatic packaging machines. The output of the vision system is used to guide the autonomous grasping of the reels by a robot for a subsequent manipulation task. The proposed solution is built around three different methods for solving the ellipse-detection problem in an image. These methods leverage standard image processing and mathematical algorithms tailored to the targeted application. An experimental campaign demonstrates the efficacy of the proposed approach, even in the presence of low computational power and limited hardware resources, as in the use case at hand.
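    One classical ingredient of ellipse detection, sketched below, is an algebraic least-squares conic fit to edge points (here, synthetic points sampled from the elliptical outline a circular reel opening presents to an off-axis camera). This is a generic textbook technique, not necessarily one of the paper's three methods, and all coordinates are illustrative.

    ```python
    import numpy as np

    def fit_conic(pts):
        """Least-squares fit of a x^2 + b xy + c y^2 + d x + e y + f = 0."""
        x, y = pts[:, 0], pts[:, 1]
        design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        # The conic coefficients are the right singular vector with the
        # smallest singular value (the near-null direction of the design).
        _, _, vt = np.linalg.svd(design)
        return vt[-1]

    def ellipse_center(coeffs):
        """Center of the conic: the point where its gradient vanishes."""
        a, b, c, d, e, _ = coeffs
        m = np.array([[2 * a, b], [b, 2 * c]])
        return np.linalg.solve(m, [-d, -e])

    # Synthetic "reel opening": an ellipse centered at (120, 80) in pixel
    # coordinates, semi-axes 40 and 25, rotated by 30 degrees.
    t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    phi = np.deg2rad(30)
    xs, ys = 40 * np.cos(t), 25 * np.sin(t)
    pts = np.column_stack([
        120 + xs * np.cos(phi) - ys * np.sin(phi),
        80 + xs * np.sin(phi) + ys * np.cos(phi),
    ])

    center = ellipse_center(fit_conic(pts))  # recovers approximately (120, 80)
    ```

    In a real pipeline the input points would come from an edge detector rather than a parametric curve, and the recovered center (plus axes and orientation, obtainable from the same conic coefficients) would feed the grasp-pose computation.
    
    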

    Challenges for Monocular 6D Object Pose Estimation in Robotics

    Object pose estimation is a core perception task that enables, for example, object grasping and scene understanding. Widely available, inexpensive, high-resolution RGB sensors, together with CNNs that allow fast inference on this modality, make monocular approaches especially well suited for robotics applications. We observe that previous surveys on object pose estimation establish the state of the art across varying modalities, single- and multi-view settings, and datasets and metrics spanning a multitude of applications. We argue, however, that the broad scope of those works hinders the identification of open challenges specific to monocular approaches and the derivation of promising research directions for their application in robotics. By providing a unified view of recent publications from both robotics and computer vision, we find that occlusion handling, novel pose representations, and formalizing and improving category-level pose estimation remain fundamental challenges that are highly relevant for robotics. Moreover, to further improve robotic performance, large object sets, novel objects, refractive materials, and uncertainty estimates are central, largely unsolved open challenges. Addressing them will require progress in ontological reasoning, deformability handling, scene-level reasoning, realistic datasets, and the ecological footprint of algorithms.