    Topologically Aware Building Rooftop Reconstruction From Airborne Laser Scanning Point Clouds

    No full text

    Outline and Shape Reconstruction in 2D

    No full text
    Outline and shape reconstruction from unstructured points in a plane is a fundamental problem with many applications that has generated research interest for decades. Aspects such as handling open, sharp, multiple, and non-manifold outlines; run-time and provability; and potential extension to 3D for surface reconstruction have led to many different algorithms. This multitude of reconstruction methods, with quite different strengths and focuses, makes it difficult for users to choose a suitable algorithm for their specific problem. In this tutorial, we present proximity graphs, graph-based algorithms, and algorithms with sampling guarantees, all in detail. We then show algorithms targeted at specific problem classes, such as reconstructing from noise, outliers, or sharp corners. Evaluation examples show how the results can guide users in selecting an appropriate algorithm for their input data. As a special application, we show the reconstruction of lines in the context of sketch completion and sketch simplification. Shape characterization of dot patterns is presented as an additional field closely related to boundary reconstruction.
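A proximity graph is the usual starting point for such reconstruction algorithms. As a minimal illustration (not from the tutorial; the helper name and circle example are assumptions), connecting each point of a densely sampled closed curve to its two nearest neighbors already recovers the outline:

```python
import math

def knn_graph(points, k=2):
    """Connect each 2D point to its k nearest neighbors.

    Returns a set of undirected edges (i, j) with i < j. For a
    sufficiently dense sampling of a smooth closed curve, k=2
    links consecutive samples and reconstructs the outline polygon.
    """
    edges = set()
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

# Dense samples on the unit circle: the 2-NN graph is the closed outline.
n = 16
circle = [(math.cos(2 * math.pi * t / n), math.sin(2 * math.pi * t / n))
          for t in range(n)]
outline = knn_graph(circle, k=2)
```

Sparse or noisy samplings break this simple rule, which is exactly where the more elaborate graph-based and guarantee-backed algorithms surveyed in the tutorial come in.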

    MC-UNet: Martian Crater Segmentation at Semantic and Instance Levels Using U-Net-Based Convolutional Neural Network

    No full text
    Crater recognition on Mars is of paramount importance for many space-science applications, such as accurate planetary surface age dating and geological mapping. Such recognition is typically achieved with image-processing techniques employing traditional convolutional neural networks (CNNs), which often suffer from slow convergence and relatively low accuracy. In this paper, we propose a novel CNN, referred to as MC-UNet (Martian Crater U-Net), in which the classical U-Net is employed as the backbone for accurate identification of Martian craters at the semantic and instance levels from thermal-emission-imaging-system (THEMIS) daytime infrared images. Compared with the classical U-Net, the depth of MC-UNet is expanded to six layers, while the maximum number of channels is reduced to one-fourth, making the proposed architecture computationally efficient while maintaining a high recognition rate for impact craters on Mars. To enhance MC-UNet, we adopt average pooling and embed channel attention into the skip connections between encoder and decoder layers at the same network depth, so that large Martian craters can be recognized more accurately. MC-UNet is trained on Martian craters with radii of 2–32 km from annotated THEMIS daytime infrared images. For the predicted crater-rim pixels, template matching is subsequently used to recognize Martian craters at the instance level. The experimental results indicate that MC-UNet can recognize Martian craters with a maximum radius of 31.28 km (136 pixels) with a recall of 0.7916 and an F1-score of 0.8355. This promising performance shows that MC-UNet is on par with or better than other classical CNN architectures, such as U-Net and Crater U-Net.
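To make the instance-level step concrete, here is a hedged sketch of ring-template matching over a binary rim mask (plain Python; the exhaustive center search and synthetic data are assumptions, not the paper's implementation):

```python
import math

def ring_template(radius):
    """Integer pixel offsets of a one-pixel-wide circle of given radius."""
    return {(round(radius * math.cos(a)), round(radius * math.sin(a)))
            for a in (math.radians(d) for d in range(360))}

def match_crater(mask, radius, height, width):
    """Find the best ring-template match in a binary rim mask.

    mask is a set of (row, col) pixels predicted as crater rim.
    Returns (score, (row, col)): score is the fraction of template
    pixels present in the mask at the best candidate center.
    """
    template = ring_template(radius)
    best_score, best_center = -1.0, None
    for cy in range(height):
        for cx in range(width):
            hits = sum((cy + dy, cx + dx) in mask for dy, dx in template)
            score = hits / len(template)
            if score > best_score:
                best_score, best_center = score, (cy, cx)
    return best_score, best_center

# Synthetic rim: a radius-5 circle centered at (10, 10) in a 21x21 mask.
rim = {(10 + dy, 10 + dx) for dy, dx in ring_template(5)}
score, center = match_crater(rim, radius=5, height=21, width=21)
```

In practice one would sweep the radius over the expected crater-size range and keep matches above a score threshold; the brute-force scan here stands in for the optimized correlation used in real pipelines.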

    Multi-View Features Joint Learning with Label and Local Distribution Consistency for Point Cloud Classification

    No full text
    In outdoor Light Detection and Ranging (LiDAR) point cloud classification, finding discriminative features for point cloud perception and scene understanding is one of the great challenges. Features derived from defect-laden (i.e., noisy, outlier-ridden, occluded, and irregular) raw outdoor LiDAR scans usually contain redundant and irrelevant information that adversely affects the accuracy of point semantic labeling. Moreover, point cloud features of different views can express different attributes of the same point, and simply concatenating features from different views cannot guarantee the applicability and effectiveness of the fused features. To solve these problems and achieve outdoor point cloud classification with fewer training samples, we propose a novel framework for the joint learning of multi-view features and classifiers. The proposed framework uses label consistency and local distribution consistency as multi-space constraints for multi-view point cloud feature extraction and classification. In the framework, manifold learning carries out subspace joint learning of multi-view features by introducing three kinds of constraints: local distribution consistency in feature space and position space, label consistency between multi-view predicted labels and the ground truth, and label consistency among the multi-view predicted labels themselves. The model can be trained well with fewer training points, and an iterative algorithm solves the joint optimization of the multi-view feature projection matrices and linear classifiers. The multi-view features are then fused and used effectively for point cloud classification. We evaluate the proposed method on five different point cloud scenes, and the experimental results demonstrate that its classification performance is on par with or better than that of the compared algorithms.
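The joint objective with the three consistency constraints can be sketched in generic notation (the symbols are assumptions, not the paper's): with view-$v$ features $X_v$, projection matrices $P_v$, linear classifiers $W_v$, labels $Y$, and a graph Laplacian $L$ built from feature- and position-space neighborhoods,

```latex
\min_{\{P_v\},\{W_v\}} \sum_{v=1}^{V}
  \Big( \underbrace{\lVert W_v^{\top} P_v X_v - Y \rVert_F^2}_{\text{label consistency with ground truth}}
      + \lambda_1 \underbrace{\operatorname{tr}\!\big(P_v X_v L X_v^{\top} P_v^{\top}\big)}_{\text{local distribution consistency}} \Big)
  + \lambda_2 \sum_{u<v} \underbrace{\lVert W_u^{\top} P_u X_u - W_v^{\top} P_v X_v \rVert_F^2}_{\text{label consistency among views}}
```

Alternating minimization over the $P_v$ and $W_v$ corresponds to the iterative algorithm the abstract describes.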

    AFGL-Net: Attentive Fusion of Global and Local Deep Features for Building Façades Parsing

    No full text
    In this paper, we propose a deep learning framework, AFGL-Net, for building façade parsing, i.e., obtaining the semantics of small façade components such as windows and doors. To this end, we present an autoencoder that embeds position and direction encoding for local feature encoding. The autoencoder enhances local feature aggregation and augments the representation of the skeleton features of windows and doors. We also integrate a Transformer into AFGL-Net to infer the geometric shapes and structural arrangements of façade components and to capture global contextual features. These global features help recognize inconspicuous windows and doors in façade points corrupted by noise, outliers, occlusions, and irregularities. An attention-based feature fusion mechanism is finally employed to obtain more informative features by simultaneously considering local geometric details and the global context. AFGL-Net is comprehensively evaluated on the Dublin and RueMonge2014 benchmarks, achieving 67.02% and 59.80% mIoU, respectively. We also demonstrate its superiority through comparisons with state-of-the-art methods and various ablation studies.
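The attention-based fusion idea can be sketched at its simplest (plain Python; the mean-activation scoring and function name are assumptions, not the network's actual attention module): a softmax over per-branch scores weights the local and global descriptors before summing, so the more informative branch dominates the fused feature.

```python
import math

def attentive_fusion(local_feat, global_feat):
    """Fuse local and global feature vectors with scalar attention.

    A softmax over per-branch scores (here simply the mean activation)
    yields two weights summing to one; the fused vector is the
    weighted sum of the two branches.
    """
    scores = [sum(local_feat) / len(local_feat),
              sum(global_feat) / len(global_feat)]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    w_local, w_global = (e / sum(exps) for e in exps)
    return [w_local * l + w_global * g
            for l, g in zip(local_feat, global_feat)]

fused = attentive_fusion([1.0, 0.0, 1.0], [0.0, 2.0, 0.0])
```

In the real network the weights are learned per channel from the features themselves rather than computed from a fixed statistic, but the weighted-sum structure is the same.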

    Semantic-aware room-level indoor modeling from point clouds

    No full text
    This paper introduces a framework for reconstructing fine-grained room-level models from indoor point clouds. Our method is motivated by the observation that urban building shapes are consistent floor by floor along the vertical direction. To this end, each floor’s points are horizontally sliced to obtain a representative cross-section, from which linear primitives are detected and enhanced. These linear primitives divide the entire space into non-overlapping connected faces with shared edges. The faces are then classified as indoor or outdoor by solving a binary energy minimization problem. The indoor faces are further grouped into individual rooms with the support of a room semantic map. By propagating and tracing each room’s contour, a 2D floor plan can be generated in a semantic-aware manner. The generated 2D floor plans are vertically extruded to match the heights of their respective rooms. Experimental results on six complex scenes from the S3DIS dataset, encompassing both linear and non-linear shapes, demonstrate that our room models exhibit accurate geometry, correct topology, and rich semantics. The source code of our room-level modeling algorithm is available at https://github.com/indoor-modeling/indoor-modeling
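The indoor/outdoor face labeling step can be illustrated with a toy binary energy: a data term per face plus a smoothness penalty on adjacent faces labeled differently. The exhaustive search below (plain Python; costs and graph are invented, and real systems use graph-cut solvers instead) shows how smoothness overrides a weak data preference:

```python
from itertools import product

def min_energy_labeling(data_cost, edges, smooth=1.0):
    """Exhaustively minimize a binary labeling energy over faces.

    data_cost[i][l] is the cost of giving face i label l
    (0 = outdoor, 1 = indoor); each edge in `edges` pays `smooth`
    when its two faces are labeled differently. Exhaustive search
    is fine for a handful of faces.
    """
    n = len(data_cost)
    best_e, best_labels = float("inf"), None
    for labels in product((0, 1), repeat=n):
        e = sum(data_cost[i][labels[i]] for i in range(n))
        e += sum(smooth for i, j in edges if labels[i] != labels[j])
        if e < best_e:
            best_e, best_labels = e, labels
    return best_labels

# Three faces in a row; the middle one weakly prefers "outdoor", but
# the smoothness term pulls it to match its indoor neighbors.
costs = [(5.0, 0.0), (0.4, 0.6), (5.0, 0.0)]
labels = min_energy_labeling(costs, edges=[(0, 1), (1, 2)], smooth=1.0)
```

Labeling the middle face outdoor would save 0.2 in data cost but pay 2.0 in smoothness, so the minimizer labels all three faces indoor.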