21 research outputs found

    A Review of Panoptic Segmentation for Mobile Mapping Point Clouds

    Full text link
    3D point cloud panoptic segmentation is the combined task of (i) assigning each point to a semantic class and (ii) separating the points in each class into object instances. Recently there has been increased interest in such comprehensive 3D scene understanding, building on the rapid advances in semantic segmentation brought about by deep 3D neural networks. Yet, to date there is very little work on panoptic segmentation of outdoor mobile-mapping data, and no systematic comparisons. The present paper tries to close that gap. It reviews the building blocks needed to assemble a panoptic segmentation pipeline and the related literature. Moreover, a modular pipeline is set up to perform comprehensive, systematic experiments to assess the state of panoptic segmentation in the context of street mapping. As a byproduct, we also provide the first public dataset for that task, by extending the NPM3D dataset to include instance labels. That dataset and our source code are publicly available. We discuss which adaptations are needed to apply current panoptic segmentation methods to outdoor scenes and large objects. Our study finds that for mobile mapping data, KPConv performs best but is slower, while PointNet++ is fastest but performs significantly worse. Sparse CNNs are in between. Regardless of the backbone, instance segmentation by clustering embedding features is better than using shifted coordinates.
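
    The headline finding, that grouping points by clustering learned embedding features beats grouping by predicted center-shifted coordinates, can be illustrated with a minimal sketch. Everything below is assumed for illustration (the function name, array shapes, the choice of DBSCAN, and parameter values); it is not the paper's pipeline:

```python
# Minimal sketch (assumed names/shapes): given per-point semantic labels and a
# learned D-dimensional embedding per point, form instances by clustering
# embeddings within each "thing" class.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_instances(embeddings, semantic_labels, thing_classes,
                      eps=0.5, min_pts=10):
    """embeddings: (N, D) float array; semantic_labels: (N,) int array."""
    instance_ids = np.full(len(semantic_labels), -1, dtype=np.int64)
    next_id = 0
    for cls in thing_classes:
        mask = semantic_labels == cls
        if mask.sum() < min_pts:
            continue
        # Cluster in embedding space; points in one cluster share an instance.
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(embeddings[mask])
        valid = labels >= 0                      # -1 marks DBSCAN noise points
        idx = np.where(mask)[0][valid]
        instance_ids[idx] = labels[valid] + next_id
        if valid.any():
            next_id = instance_ids.max() + 1
    return instance_ids
```

    In the shifted-coordinate alternative the review compares against, the same clustering step would instead run on the 3D points translated by predicted offsets toward their instance centers, rather than on learned embeddings.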

    AGILE3D: Attention Guided Interactive Multi-object 3D Segmentation

    Full text link
    During interactive segmentation, a model and a user work together to delineate objects of interest in a 3D point cloud. In an iterative process, the model assigns each data point to an object (or the background), while the user corrects errors in the resulting segmentation and feeds them back into the model. The current best practice formulates the problem as binary classification and segments objects one at a time. The model expects the user to provide positive clicks to indicate regions wrongly assigned to the background and negative clicks on regions wrongly assigned to the object. Sequentially visiting objects is wasteful since it disregards synergies between objects: a positive click for a given object can, by definition, serve as a negative click for nearby objects. Moreover, a direct competition between adjacent objects can speed up the identification of their common boundary. We introduce AGILE3D, an efficient, attention-based model that (1) supports simultaneous segmentation of multiple 3D objects, (2) yields more accurate segmentation masks with fewer user clicks, and (3) offers faster inference. Our core idea is to encode user clicks as spatial-temporal queries and enable explicit interactions between click queries as well as between them and the 3D scene through a click attention module. Every time new clicks are added, we only need to run a lightweight decoder that produces updated segmentation masks. In experiments with four different 3D point cloud datasets, AGILE3D sets a new state-of-the-art. Moreover, we also verify its practicality in real-world setups with real user studies. Project page: https://ywyue.github.io/AGILE3
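
    As a rough illustration of the core idea, clicks encoded as queries that attend both to the scene and to one another, here is a schematic PyTorch sketch. The module name, shapes, layer choices, and the dot-product mask head are all assumptions made for illustration, not the authors' architecture:

```python
# Schematic sketch of clicks-as-queries (hypothetical names/shapes; not the
# authors' code): user clicks become queries that attend to scene features and
# to each other, and a light head turns the updated queries into object masks.
import torch
import torch.nn as nn

class ClickAttentionDecoder(nn.Module):
    def __init__(self, dim=128, num_heads=8):
        super().__init__()
        self.click_to_scene = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.click_to_click = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mask_head = nn.Linear(dim, dim)

    def forward(self, click_queries, scene_feats):
        # click_queries: (B, Q, dim), one query per user click (object identity
        # and click order assumed encoded upstream); scene_feats: (B, N, dim).
        q, _ = self.click_to_scene(click_queries, scene_feats, scene_feats)
        q, _ = self.click_to_click(q, q, q)  # explicit interaction between clicks
        # Dot-product mask prediction: (B, Q, N) logits, one mask per click.
        return torch.einsum('bqd,bnd->bqn', self.mask_head(q), scene_feats)
```

    The abstract's efficiency claim fits this shape: when a new click arrives, only such a lightweight decoder needs to rerun, while the scene encoding stays fixed.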

    ImpliCity: City Modeling From Satellite Images with Deep Implicit Occupancy Fields

    No full text
    High-resolution optical satellite sensors, combined with dense stereo algorithms, have made it possible to reconstruct 3D city models from space. However, these models are, in practice, rather noisy and tend to miss small geometric features that are clearly visible in the images. We argue that one reason for the limited quality may be a premature, heuristic reduction of the triangulated 3D point cloud to an explicit height field or surface mesh. To make full use of the point cloud and the underlying images, we introduce ImpliCity, a neural representation of the 3D scene as an implicit, continuous occupancy field, driven by learned embeddings of the point cloud and a stereo pair of ortho-photos. We show that this representation enables the extraction of high-quality DSMs: with image resolution 0.5 m, ImpliCity reaches a median height error of ≈ 0.7 m and outperforms competing methods, especially w.r.t. building reconstruction, featuring intricate roof details, smooth surfaces, and straight, regular outlines.
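
    To make the occupancy-field idea concrete, the sketch below shows a hypothetical query network and a naive DSM read-out. ImpliCity's actual conditioning on point-cloud and ortho-photo embeddings is more involved; every name, shape, and threshold here is an assumption for illustration:

```python
# Minimal sketch of an implicit occupancy field (hypothetical architecture):
# an MLP maps a 3D query coordinate plus a learned local embedding to an
# occupancy probability; a DSM can then be read out by probing occupancy
# along vertical columns.
import torch
import torch.nn as nn

class OccupancyField(nn.Module):
    def __init__(self, embed_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz, local_embedding):
        # xyz: (B, 3) query points; local_embedding: (B, embed_dim) features
        # pooled from the point cloud / ortho-images around each query.
        return torch.sigmoid(self.mlp(torch.cat([xyz, local_embedding], dim=-1)))

def dsm_height(field, x, y, embedding, z_grid):
    # x, y: scalar tensors; z_grid: ascending 1-D tensor of candidate heights.
    # Return the highest z whose occupancy exceeds 0.5 (a simple surface rule).
    xyz = torch.stack([x.expand_as(z_grid), y.expand_as(z_grid), z_grid], dim=-1)
    occ = field(xyz, embedding.expand(len(z_grid), -1)).squeeze(-1)
    above = (occ > 0.5).nonzero()
    return z_grid[above.max()] if len(above) else z_grid[0]
```

    Deferring the surface decision to query time like this is what distinguishes the implicit field from the early, heuristic meshing step the abstract criticizes.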

    Primitive Genepools of Asian Pears and Their Complex Hybrid Origins Inferred from Fluorescent Sequence-Specific Amplification Polymorphism (SSAP) Markers Based on LTR Retrotransposons

    No full text
    Recent evidence indicated that interspecific hybridization was the major mode of evolution in Pyrus. The genetic relationships and origins of the Asian pear are still unclear because of frequent hybrid events, fast radial evolution, and a lack of informative data. Here, we developed fluorescent sequence-specific amplification polymorphism (SSAP) markers with many informative sites and high polymorphism to analyze the population structure among 93 pear accessions, including nearly all species native to Asia. Results of a population structure analysis indicated that nearly all Asian pear species experienced hybridization and originated from five primitive genepools. Four genepools corresponded to four primary Asian species: P. betulaefolia, P. pashia, P. pyrifolia, and P. ussuriensis. However, cultivars of P. ussuriensis were not monophyletic, and introgression occurred from P. pyrifolia. The specific genepool detected in putative hybrids between occidental and oriental pears might be from occidental pears. The remaining species in Asia, including P. calleryana, P. xerophila, P. sinkiangensis, P. phaeocarpa, P. hondoensis, and P. hopeiensis, were inferred to be of hybrid origins, and their possible genepools were identified. This study will be of great help for understanding the origin and evolution of Asian pears.