
    Grasping unknown objects in clutter by superquadric representation

    In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The method relies on real-time superquadric (SQ) representation of partially viewed objects and on incomplete object modelling, well suited to unknown symmetric objects in cluttered scenes, followed by optimized antipodal grasping. The incomplete object models are processed by a mirroring algorithm that assumes symmetry to first create an approximate complete model, which is then fitted with an SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface-curvature information from the SQ parameters. The pose of each SQ with respect to the direction of gravity is computed and used, together with the SQ parameters and the gripper specification, to select the best approach direction and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 robot and real-time results are presented. Initial results indicate that, although the method is based on simple shape information, it outperforms other learning-based grasping algorithms that also work in clutter in terms of time efficiency and accuracy.
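
    As a rough illustration of the superquadric machinery this abstract relies on, the following sketch fits the five shape parameters of a superquadric to a point cloud with non-linear least squares, using the standard inside-outside function and a Solina-Bajcsy style cost. It assumes the cloud has already been centred and aligned with its principal axes; the pose recovery, symmetry-based mirroring, and grasp selection described above are not reproduced, and all names and parameter choices are illustrative rather than taken from the paper.

```python
# Hedged sketch: fit a superquadric (SQ) to a centred, axis-aligned partial point
# cloud with non-linear least squares. Pose estimation, mirroring, and grasping
# from the described pipeline are intentionally omitted.
import numpy as np
from scipy.optimize import least_squares

def sq_inside_outside(points, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function F; F == 1 on the surface."""
    x, y, z = np.abs(points).T
    f_xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return f_xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

def residuals(params, points):
    a1, a2, a3, e1, e2 = params
    F = sq_inside_outside(points, a1, a2, a3, e1, e2)
    # Solina-Bajcsy style cost: penalise deviation from F == 1, scaled by volume
    return np.sqrt(a1 * a2 * a3) * (F ** (e1 / 2.0) - 1.0)

def fit_superquadric(points):
    """points: (N, 3) array, assumed centred and aligned with its principal axes."""
    half_extent = np.maximum(np.abs(points).max(axis=0), 1e-3)
    x0 = np.array([*half_extent, 1.0, 1.0])             # start from an ellipsoid
    bounds = ([1e-3] * 3 + [0.1, 0.1], [np.inf] * 3 + [2.0, 2.0])
    return least_squares(residuals, x0, bounds=bounds, args=(points,)).x

# toy usage: recover the parameters of a noisy ellipsoidal surface sample
rng = np.random.default_rng(0)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
cloud = dirs * [0.05, 0.03, 0.10] + rng.normal(scale=1e-3, size=(2000, 3))
print(fit_superquadric(cloud))   # roughly [0.05, 0.03, 0.10, 1.0, 1.0]
```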

    Surface Reconstruction from Unorganized Point Cloud Data via Progressive Local Mesh Matching

    This thesis presents an integrated triangle-mesh processing framework for surface reconstruction based on Delaunay triangulation. It features an innovative multi-level inheritance priority queuing mechanism for seeking and updating the optimal local manifold mesh at each data point. The proposed algorithms aim to generate a watertight triangle mesh interpolating all of the input points once all fully matched local manifold meshes (umbrellas) have been found. Compared to existing reconstruction algorithms, the proposed algorithms automatically reconstruct a watertight interpolating triangle mesh without additional hole-filling or manifold post-processing. The resulting surface effectively recovers the sharp features of the scanned physical object and reliably captures its correct topology and geometric shape. The main Umbrella Facet Matching (UFM) algorithm and its two extended algorithms are documented in detail in the thesis. The UFM algorithm implements the core surface reconstruction framework based on the multi-level inheritance priority queuing mechanism, driven by the progressive matching results of the local meshes. The first extended algorithm presents a new combinatorial normal-vector estimation method for point cloud data that depends on the local mesh matching results, which benefits sharp-feature reconstruction. The second extended algorithm addresses sharp-feature preservation in surface reconstruction through the proposed normal vector cone (NVC) filtering. The effectiveness of these algorithms has been demonstrated using both simulated and real-world point cloud data sets. For each algorithm, multiple case studies are performed and analyzed to validate its performance.
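
    To make the notion of a local manifold mesh (umbrella) concrete, the sketch below builds a candidate umbrella around one point by projecting its k nearest neighbours onto a PCA tangent plane, running a 2-D Delaunay triangulation, and keeping the triangles incident to that point. This is only a generic illustration of the idea; the thesis's multi-level inheritance priority queuing, matching, and NVC filtering are not reproduced, and the function name and parameters are invented for the example.

```python
# Hedged sketch: one candidate "umbrella" (local fan of triangles) around a point,
# built from a tangent-plane Delaunay triangulation of its k nearest neighbours.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def local_umbrella(points, idx, k=12):
    """Return candidate triangles (global index triples) incident to points[idx]."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points[idx], k=k)           # nbrs[0] is idx itself
    patch = points[nbrs] - points[idx]
    # tangent plane from local PCA: the two dominant principal directions
    _, _, vt = np.linalg.svd(patch, full_matrices=False)
    uv = patch @ vt[:2].T                            # 2-D coordinates in that plane
    tri = Delaunay(uv)
    centre_local = 0                                 # points[idx] maps to row 0
    umbrella = [nbrs[s] for s in tri.simplices if centre_local in s]
    return np.array(umbrella)

# toy usage on a noisy planar patch
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                       rng.normal(scale=0.01, size=200)])
print(local_umbrella(pts, idx=0))
```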

    Acta Cybernetica : Volume 25. Number 2.


    An Object SLAM Framework for Association, Mapping, and High-Level Tasks

    Object SLAM is increasingly significant for robot high-level perception and decision-making. Existing studies fall short in data association, object representation, and semantic mapping, and frequently rely on additional assumptions that limit their performance. In this paper, we present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks. First, we propose an ensemble data association approach for associating objects under complicated conditions by combining parametric and nonparametric statistical tests. In addition, we propose an outlier-robust centroid and scale estimation algorithm for modeling objects based on iForest and line alignment. A lightweight, object-oriented map is then built from the estimated general object models. Considering the semantic invariance of objects, we convert the object map into a topological map that provides semantic descriptors to enable multi-map matching. Finally, we propose an object-driven active exploration strategy to achieve autonomous mapping in a grasping scenario. The proposed framework has been evaluated on a range of public datasets and in real-world experiments covering mapping, augmented reality, scene matching, relocalization, and robotic manipulation, demonstrating its efficiency. Comment: Accepted by IEEE Transactions on Robotics (T-RO).
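
    The sketch below illustrates one plausible reading of the outlier-robust centroid and scale estimation step, using scikit-learn's IsolationForest to discard isolated points before computing a centroid and an axis-aligned extent. The contamination value, function name, and toy data are assumptions for illustration only; the paper's line-alignment refinement and ensemble data association are not shown.

```python
# Hedged sketch: outlier-robust centroid and scale estimation for one object's
# points with an isolation forest; parameters are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import IsolationForest

def robust_centroid_and_scale(object_points, contamination=0.1):
    """object_points: (N, 3) points segmented for one object in the map frame."""
    labels = IsolationForest(contamination=contamination,
                             random_state=0).fit_predict(object_points)
    inliers = object_points[labels == 1]              # -1 marks isolated outliers
    centroid = inliers.mean(axis=0)
    scale = inliers.max(axis=0) - inliers.min(axis=0) # axis-aligned extent
    return centroid, scale

# toy usage: a box-shaped object with a few stray depth points
rng = np.random.default_rng(2)
obj = rng.uniform(-0.1, 0.1, size=(500, 3))
obj[:10] += 1.5                                       # simulated outliers
centroid, scale = robust_centroid_and_scale(obj)
print(centroid, scale)
```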

    High-resolution digital 3D models of Algar do Penico Chamber: limitations, challenges, and potential

    The study of karst and its geomorphological structures is important for understanding the relationships between hydrology and climate over geological time. In that context, we conducted a terrestrial laser-scanning survey to map geomorphological structures in the karst cave of Algar do Penico in southern Portugal. The point cloud data set obtained was used to generate 3D meshes with different levels of detail, allowing the limits of the mapping capability to be explored. In addition to cave mapping, the study focuses on 3D-mesh analysis, including the development of two algorithms for determining stalactite extremities and contour lines, and on the interactive visualization of 3D meshes on the Web. Data processing and analysis were performed using freely available open-source software. For interactive visualization, we adopted a framework based on the Web standards X3D, WebGL, and X3DOM. This solution gives both the general public and researchers access to the 3D models, and to the additional data produced by the map-analysis tools, through a web browser and without the need for plug-ins.
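
    As a simplified reading of the stalactite-extremity idea, the sketch below flags mesh vertices that lie below all of their 1-ring neighbours as candidate tips, assuming the z axis points upward. The threshold, names, and toy mesh are invented for illustration; the paper's actual extremity and contour-line algorithms are not reproduced.

```python
# Hedged sketch: candidate stalactite tips as local minima of elevation on a mesh.
import numpy as np

def stalactite_tips(vertices, faces, min_drop=0.005):
    """vertices: (V, 3) with z up; faces: (F, 3) vertex indices; returns tip indices."""
    neighbours = [set() for _ in range(len(vertices))]
    for a, b, c in faces:                     # build 1-ring adjacency from triangles
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    tips = []
    for v, nbrs in enumerate(neighbours):
        if nbrs and all(vertices[v, 2] + min_drop < vertices[n, 2] for n in nbrs):
            tips.append(v)
    return np.array(tips, dtype=int)

# toy usage: a downward spike in the middle of a 3x3 grid
verts = np.array([[x, y, 0.0] for y in range(3) for x in range(3)], dtype=float)
verts[4, 2] = -0.1                            # centre vertex pulled down
faces = np.array([[0, 1, 4], [1, 2, 4], [2, 5, 4], [5, 8, 4],
                  [8, 7, 4], [7, 6, 4], [6, 3, 4], [3, 0, 4]])
print(stalactite_tips(verts, faces))          # -> [4]
```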

    PointPCA: Point Cloud Objective Quality Assessment Using PCA-Based Descriptors

    Point clouds are a prominent solution for the representation of 3-D photo-realistic content in immersive applications. As with other imaging modalities, quality predictions for point cloud contents are vital for a wide range of applications, enabling trade-offs between data quality and data size to be optimized in every processing step from acquisition to consumption. In this work, we focus on use cases in which human end-users consume point cloud contents, and hence we concentrate on visual quality metrics. In particular, we propose a set of perceptually relevant descriptors based on Principal Component Analysis (PCA) decomposition, applied to both geometry and texture data for full-reference point cloud quality assessment. Statistical features are derived from these descriptors to characterize local shape and appearance properties for both a reference and a distorted point cloud; they are then compared to predict the visual quality of the latter. As part of our method, a learning-based approach is proposed to fuse these individual quality predictors into a unified perceptual score. Various regression models are evaluated for this task and shown to be effective in harnessing the predictors' strength. We validate the accuracy of the individual quality predictors, as well as the unified quality scores obtained from each regression model, against subjectively annotated datasets, and we show that non-linear regression models exhibit notable gains with respect to the current literature. A software implementation of the proposed metric is available at: https://github.com/cwi-dis/pointpca. Comment: 10 pages, 4 figures, 3 tables.
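
    To give a flavour of PCA-based geometric descriptors for quality assessment, the sketch below computes per-point linearity, planarity, and sphericity from k-nearest-neighbour covariance eigenvalues and pools absolute descriptor differences between a reference and a distorted cloud into a single score. The specific features, the nearest-neighbour correspondence, and the pooling are illustrative assumptions; the actual PointPCA descriptors, texture handling, and learned regression fusion live in the linked repository.

```python
# Hedged sketch: per-point PCA shape descriptors and a naive pooled comparison
# between a reference and a distorted point cloud (lower score = closer).
import numpy as np
from scipy.spatial import cKDTree

def pca_descriptors(points, k=16):
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, idx in enumerate(nbrs):
        patch = points[idx] - points[idx].mean(axis=0)
        evals = np.linalg.eigvalsh(patch.T @ patch / k)[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(evals, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]    # lin., plan., sph.
    return feats

def simple_quality_score(reference, distorted, k=16):
    """Pool absolute descriptor differences over nearest-neighbour correspondences."""
    ref_f = pca_descriptors(reference, k)
    dist_f = pca_descriptors(distorted, k)
    nearest = cKDTree(distorted).query(reference)[1]
    return float(np.mean(np.abs(ref_f - dist_f[nearest])))

# toy usage: a noisy copy of the reference should score close to zero
rng = np.random.default_rng(3)
ref = rng.uniform(size=(1000, 3))
noisy = ref + rng.normal(scale=0.01, size=ref.shape)
print(simple_quality_score(ref, noisy))
```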

    Doctor of Philosophy

    Shape analysis is a well-established tool for processing surfaces. It is often a first step in tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces whose geometry and topology are precisely known. When the surface takes the form of a point cloud containing nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first reconstruct a surface from such a point cloud before performing shape analysis, if the reconstructed geometry and topology are far from the true surface, this can adversely affect the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This dissertation explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction, in order to better understand the types of point clouds for which reconstruction methods have difficulties. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring reconstruction error. We then develop a new method for consistently orienting the normals of such challenging point clouds using a collection of harmonic functions defined intrinsically on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood of two points belonging to a mutually common medial ball, and apply it to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud that is tolerant to missing data, which is used for matching incomplete shapes undergoing nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection.
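
    For context on the normal-orientation problem mentioned above, the sketch below shows the classic minimum-spanning-tree propagation of Hoppe et al., which flips normals so that neighbouring normals agree in sign. It is a familiar baseline, not the dissertation's harmonic-function method; the graph weights and parameters are illustrative assumptions.

```python
# Hedged sketch: classic MST-based normal propagation (Hoppe et al.), shown as a
# baseline for the orientation problem; NOT the harmonic-function approach
# developed in the dissertation.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order
from scipy.spatial import cKDTree

def orient_normals(points, normals, k=8):
    """Flip unoriented unit normals so that neighbouring normals agree in sign."""
    n = len(points)
    _, nbrs = cKDTree(points).query(points, k=k + 1)
    rows = np.repeat(np.arange(n), k)
    cols = nbrs[:, 1:].ravel()                        # drop each point's self-match
    # edges are cheap where neighbouring normals are already well aligned
    weights = 1.0 - np.abs(np.einsum('ij,ij->i', normals[rows], normals[cols])) + 1e-9
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph.maximum(graph.T))
    order, preds = breadth_first_order(mst + mst.T, i_start=0, directed=False)
    oriented = normals.copy()
    for v in order[1:]:                               # propagate sign along the tree
        if np.dot(oriented[v], oriented[preds[v]]) < 0:
            oriented[v] = -oriented[v]
    return oriented

# toy usage: randomly flipped sphere normals become globally consistent
rng = np.random.default_rng(4)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
flipped = pts * rng.choice([-1.0, 1.0], size=(500, 1))
out = orient_normals(pts, flipped)
print(abs(np.sum(np.sign(np.einsum('ij,ij->i', out, pts)))))   # close to 500
```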

    The combination of geomatic approaches and operational modal analysis to improve calibration of finite element models: a case of study in Saint Torcato church (Guimarães, Portugal)

    This paper presents a set of procedures based on laser scanning, photogrammetry (Structure from Motion), and operational modal analysis to obtain accurate numerical models that allow the architectural complications arising in historical buildings to be identified. In addition, the method includes tools that facilitate building-damage monitoring tasks. All of these aim to provide a robust basis for numerical analysis of the actual structural behavior and for monitoring tasks. This case study seeks to validate the proposed methodologies, using as an example the case of Saint Torcato Church, located in Guimarães, Portugal.
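
    Calibration of a finite element model against operational modal analysis results commonly compares experimental and numerical mode shapes with the Modal Assurance Criterion (MAC); the sketch below computes a MAC matrix for that purpose. It is shown as common supporting machinery under the assumption of real-valued mode shapes, not as the paper's specific calibration procedure, and the toy data are invented.

```python
# Hedged sketch: Modal Assurance Criterion (MAC) between experimental and
# numerical mode shapes, MAC(i, j) = (phi_e_i . phi_n_j)^2 /
# ((phi_e_i . phi_e_i) * (phi_n_j . phi_n_j)), assuming real-valued shapes.
import numpy as np

def mac_matrix(phi_exp, phi_num):
    """phi_exp: (n_dof, n_exp_modes), phi_num: (n_dof, n_num_modes) mode shapes."""
    cross = phi_exp.T @ phi_num                       # (n_exp, n_num)
    norm_e = np.sum(phi_exp ** 2, axis=0)[:, None]
    norm_n = np.sum(phi_num ** 2, axis=0)[None, :]
    return cross ** 2 / (norm_e * norm_n)             # values in [0, 1]

# toy usage: identical first mode (MAC = 1), weakly correlated second mode
phi_a = np.array([[1.0, 1.0], [2.0, -1.0], [3.0, 0.5]])
phi_b = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 1.0]])
print(np.round(mac_matrix(phi_a, phi_b), 3))
```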