
    SEGCloud: Semantic Segmentation of 3D Point Clouds

    3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks (NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework for 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation (TI) and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. Then the FC-CRF enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state of the art on all datasets. Comment: Accepted as a spotlight at the International Conference on 3D Vision (3DV 2017).
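    The voxel-to-point transfer step described above can be sketched with standard trilinear interpolation. This is a minimal illustration, not the SEGCloud implementation; the function name and argument layout are assumptions.

```python
import numpy as np

def trilinear_interpolate(voxel_scores, points, origin, voxel_size):
    """Transfer per-voxel class scores back to raw 3D points.

    voxel_scores: (X, Y, Z, C) class scores on a regular voxel grid.
    points:       (N, 3) raw point coordinates.
    origin:       (3,) world coordinate of voxel index (0, 0, 0).
    voxel_size:   scalar edge length of each voxel.
    Returns (N, C) interpolated per-point scores.
    """
    # Continuous grid coordinates of each point.
    g = (points - origin) / voxel_size
    g0 = np.floor(g).astype(int)   # lower-corner voxel indices
    frac = g - g0                  # fractional offsets in [0, 1)

    X, Y, Z, C = voxel_scores.shape
    out = np.zeros((len(points), C))
    # Accumulate weighted contributions from the 8 surrounding corners.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = g0 + np.array([dx, dy, dz])
                idx = np.clip(idx, 0, [X - 1, Y - 1, Z - 1])  # clamp at grid borders
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1 - frac[:, 1])
                     * np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * voxel_scores[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out
```

    A point lying exactly at a voxel corner recovers that voxel's scores; a point at the cell center receives the mean of its 8 neighbors.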

    Sensor fusion for semantic segmentation of urban scenes

    Semantic understanding of environments is an important problem in robotics in general and intelligent autonomous systems in particular. In this paper, we propose a semantic segmentation algorithm which effectively fuses information from images and 3D point clouds. The proposed method incorporates information from multiple scales in an intuitive and effective manner. A late-fusion architecture is proposed to maximally leverage the training data in each modality. Finally, a pairwise Conditional Random Field (CRF) is used as a post-processing step to enforce spatial consistency in the structured prediction. The proposed algorithm is evaluated on the publicly available KITTI dataset [1] [2], augmented with additional pixel- and point-wise semantic labels for building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence regions. A per-pixel accuracy of 89.3% and an average class accuracy of 65.4% are achieved, well above the current state of the art [3].
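    One common way to realize a late-fusion architecture is to combine the per-class posteriors of the modality-specific classifiers. The sketch below uses a weighted geometric mean of the two probability maps; this is an assumed, generic scheme, not the paper's exact fusion rule.

```python
import numpy as np

def late_fuse(image_probs, cloud_probs, w_image=0.5):
    """Late fusion of per-class posteriors from two modalities.

    image_probs, cloud_probs: (N, C) class probabilities for each
    pixel/point correspondence. Returns renormalized fused (N, C) probs.
    """
    eps = 1e-12  # avoid log(0)
    # Weighted geometric mean in log space.
    log_fused = (w_image * np.log(image_probs + eps)
                 + (1 - w_image) * np.log(cloud_probs + eps))
    fused = np.exp(log_fused)
    return fused / fused.sum(axis=1, keepdims=True)  # renormalize rows
```

    The fused scores can then serve as the unary potentials of the pairwise CRF used for spatial smoothing.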

    Multi-Scale Hierarchical Conditional Random Field for Railway Electrification Scene Classification Using Mobile Laser Scanning Data

    With the recent rapid development of high-speed railways in many countries, precise inspection of railway electrification systems has become more significant to ensure safe railway operation. However, manual inspection is time-consuming and cannot satisfy this demanding task, so a safe, fast and automatic inspection method is required. With LiDAR (Light Detection and Ranging) data becoming more available, accurate railway electrification scene understanding using LiDAR data becomes feasible, a step towards automatic, precise 3D inspection. This thesis presents a supervised learning method to classify railway electrification objects from Mobile Laser Scanning (MLS) data. First, a multi-range Conditional Random Field (CRF), whose probabilistic graphical model characterizes not only labeling homogeneity at short range but also the layout compatibility between different objects at middle range, is implemented and tested. Then, this multi-range CRF model is extended and improved into a hierarchical CRF model that considers multi-scale layout compatibility at full range. The proposed method is evaluated on a dataset collected in Korea in a complex railway electrification environment. The experiments show the effectiveness of the proposed model.
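    The multi-range idea above can be made concrete as an energy with a unary term, a short-range smoothness (Potts) term, and a middle-range layout-compatibility term. This is a hypothetical, simplified sketch of such an energy, not the thesis's actual model; all names and the compatibility matrix are assumptions.

```python
import numpy as np

def crf_energy(labels, unary, short_pairs, mid_pairs, compat,
               w_short=1.0, w_mid=1.0):
    """Energy of a labeling under a simplified multi-range CRF.

    labels:      (N,) integer class label per point.
    unary:       (N, C) per-point negative log class scores.
    short_pairs: (P, 2) index pairs within the short range (smoothness).
    mid_pairs:   (Q, 2) index pairs within the middle range (layout).
    compat:      (C, C) layout-compatibility cost between object classes.
    Lower energy means a more plausible labeling.
    """
    # Unary data term: cost of assigning each point its label.
    e = unary[np.arange(len(labels)), labels].sum()
    # Short range: Potts smoothness, pay a cost when neighbors disagree.
    i, j = short_pairs.T
    e += w_short * np.sum(labels[i] != labels[j])
    # Middle range: pay the pairwise layout cost between object classes
    # (e.g. a contact wire is expected above a rail, not beside a pole).
    i, j = mid_pairs.T
    e += w_mid * compat[labels[i], labels[j]].sum()
    return e
```

    Inference then searches for the labeling minimizing this energy; the hierarchical extension would add further terms over full-range cliques.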