
    Grasping unknown objects in clutter by superquadric representation

    © 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The method relies on real-time superquadric (SQ) representation of partial-view objects and incomplete object modelling, well suited to unknown symmetric objects in cluttered scenarios, followed by optimized antipodal grasping. The incomplete object models are processed by a mirroring algorithm that assumes symmetry to first create an approximate complete model, which is then fitted with an SQ representation. The grasping algorithm is designed for maximum force balance and stability, exploiting the quick retrieval of dimension and surface-curvature information from the SQ parameters. The pose of each SQ with respect to the direction of gravity is computed and used, together with the SQ parameters and the gripper specification, to select the best approach direction and contact points. The SQ fitting method has been tested on custom datasets containing objects both in isolation and in clutter. The grasping algorithm is evaluated on a PR2 robot and real-time results are presented. Initial results indicate that, although the method is based on simple shape information, it outperforms other learning-based grasping algorithms that also work in clutter, in terms of both time-efficiency and accuracy.

    Peer Reviewed. Postprint (author's final draft).
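    As a concrete illustration of the shape model this abstract refers to (a generic sketch, not the paper's code), the standard superquadric inside-outside function packs the dimension and surface-curvature information into five parameters: semi-axes a1, a2, a3 and shape exponents e1, e2.

```python
import numpy as np

def superquadric_F(p, a, e):
    """Standard superquadric inside-outside function.
    a = (a1, a2, a3): semi-axes (dimensions);
    e = (e1, e2): shape exponents (surface curvature).
    F < 1: point inside; F == 1: on the surface; F > 1: outside."""
    x, y, z = p
    a1, a2, a3 = a
    e1, e2 = e
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# With e1 = e2 = 1 the surface is an ellipsoid, so a point on a
# semi-axis endpoint evaluates to exactly 1.
print(superquadric_F((1.0, 0.0, 0.0), (1.0, 2.0, 3.0), (1.0, 1.0)))
```

    Fitting an SQ to a (mirrored) partial point cloud then typically amounts to minimizing a residual of this function over the points; the exponents e1 and e2 give the curvature cues used for grasp selection.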

    Symmetry-guided nonrigid registration: the case for distortion correction in multidimensional photoemission spectroscopy

    Image symmetrization is an effective strategy to correct symmetry distortion in experimental data for which symmetry is essential in the subsequent analysis. In the process, a coordinate transform, the symmetrization transform, is required to undo the distortion. The transform may be determined by image registration (i.e. alignment) with symmetry constraints imposed in the registration target and in the iterative parameter tuning, which we call symmetry-guided registration. An example use case of image symmetrization is found in electronic band structure mapping by multidimensional photoemission spectroscopy, which employs a 3D time-of-flight detector to measure electrons sorted into the momentum (k_x, k_y) and energy (E) coordinates. In reality, imperfect instrument design, sample geometry and experimental settings cause distortion of the photoelectron trajectories and, therefore, the symmetry in the measured band structure, which hinders the full understanding and use of the volumetric datasets. We demonstrate that symmetry-guided registration can correct the symmetry distortion in the momentum-resolved photoemission patterns. Using proposed symmetry metrics, we show quantitatively that the iterative approach to symmetrization outperforms its non-iterative counterpart in the restored symmetry of the outcome while preserving the average shape of the photoemission pattern. Our approach is generalizable to distortion corrections in different types of symmetries and should also find applications in other experimental methods that produce images with similar features.
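    The paper's own symmetry metrics are not reproduced here, but a minimal, hypothetical stand-in conveys the idea: score how well an image matches a rotated copy of itself, so that 1.0 means perfect symmetry under that rotation.

```python
import numpy as np

def rotational_symmetry_score(img, k=1):
    """Crude rotational-symmetry metric (illustrative only):
    1 - normalized mean absolute difference between img and img
    rotated by k * 90 degrees. Returns 1.0 for perfect symmetry."""
    rot = np.rot90(img, k)
    denom = np.abs(img).mean() + np.abs(rot).mean() + 1e-12
    return 1.0 - np.abs(img - rot).mean() / denom

# A uniform image is trivially symmetric under any 90-degree rotation.
print(rotational_symmetry_score(np.ones((4, 4))))
```

    An iterative symmetrization scheme would tune the registration transform to drive such a score toward 1 while constraining the average shape of the pattern.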

    Recovering 6D Object Pose: A Review and Multi-modal Analysis

    A large number of studies analyse object detection and pose estimation at the visual level in 2D, discussing the effects of challenges such as occlusion, clutter and texture on the performance of methods that work in the RGB modality. Interpreting the depth data as well, this paper presents thorough multi-modal analyses. It discusses the above-mentioned challenges for full 6D object pose estimation in RGB-D images, comparing the performance of several 6D detectors in order to answer the following questions: What is the current position of the computer vision community for maintaining "automation" in robotic manipulation? What next steps should the community take for improving "autonomy" in robotics while handling objects? Our findings include: (i) reasonably accurate results are obtained on textured objects at varying viewpoints with cluttered backgrounds; (ii) heavy occlusion and clutter severely affect the detectors, and similar-looking distractors are the biggest challenge in recovering instances' 6D pose; (iii) template-based methods and random-forest-based learning algorithms underlie object detection and 6D pose estimation, while the recent paradigm is to learn deep discriminative feature representations and to adopt CNNs taking RGB images as input; (iv) given the availability of large-scale 6D-annotated depth datasets, feature representations can be learnt on these datasets, and the learnt representations can then be customized for the 6D problem.
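    For readers new to the problem, a "6D pose" is simply a rigid transform, a 3D rotation plus a 3D translation, conventionally packed into a 4x4 homogeneous matrix (a generic sketch, not tied to any detector in the review):

```python
import numpy as np

def pose_matrix(R, t):
    """Pack a 3x3 rotation R and 3-vector translation t into the
    4x4 homogeneous transform used to report a 6D object pose."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Mapping the object origin through a pure translation.
T = pose_matrix(np.eye(3), np.array([1.0, 2.0, 3.0]))
print(T @ np.array([0.0, 0.0, 0.0, 1.0]))
```

    Evaluation protocols for the 6D detectors compared in such studies typically measure the discrepancy between an estimated and a ground-truth transform of this form.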