8 research outputs found

    The contribution of deep learning to the semantic segmentation of 3D point-clouds in urban areas

    Peer reviewed
    Semantic segmentation of large-scale urban environments is crucial for a deep and rigorous understanding of cities. The development of Lidar tools in terms of resolution and precision offers a good opportunity to meet the need for 3D city models. In this context, deep learning has revolutionized the field of computer vision and demonstrates good performance in semantic segmentation. To achieve this objective, we propose to design a scientific methodology involving a deep learning method that integrates several data sources (Lidar data, aerial images, etc.) to recognize objects semantically and automatically. We aim to automatically extract the maximum amount of semantic information from an urban environment with high accuracy and performance.

    Toward a Deep Learning Approach for Automatic Semantic Segmentation of 3D Lidar Point Clouds in Urban Areas

    Peer reviewed
    Semantic segmentation of Lidar data using Deep Learning (DL) is a fundamental step toward a deep and rigorous understanding of large-scale urban areas. Indeed, the continuing development of Lidar technology in terms of accuracy and spatial resolution offers a better opportunity for delivering reliable semantic segmentation of large-scale urban environments. Significant progress has been reported in this direction. However, the literature lacks a thorough comparison of the existing methods and algorithms in terms of strengths and weaknesses. The aim of the present paper is therefore to propose an objective review of these methods, highlighting their strengths and limitations. We then propose a new approach based on the combination of Lidar data and other sources, in conjunction with a Deep Learning technique, whose objective is to automatically extract semantic information from airborne Lidar point clouds while enhancing both accuracy and semantic precision compared with existing methods. We finally present the first results of our approach.

    OBTAINING 3D SEMANTIC OBJECTS FOR URBAN APPLICATIONS - SEM3D

    This project provides the city of Liège with procedures for extracting 3D urban objects required for the implementation of urban digital twins. The processing chain relies in particular on the application of deep learning methods to Lidar data and orthophotos of the Walloon Region.

    Three-Dimensional Change Detection Using Point Clouds: A Review

    Peer reviewed
    Change detection is an important step for the characterization of object dynamics at the earth’s surface. In multi-temporal point clouds, the main challenge is to detect true changes at different granularities in a scene subject to significant noise and occlusion. To better understand new research perspectives in this field, a deep review of recent advances in 3D change detection methods is needed. To this end, we present a comprehensive review of the state of the art of 3D change detection approaches, mainly those using 3D point clouds. We review standard methods and recent advances in the use of machine and deep learning for change detection. In addition, the paper presents a summary of 3D point cloud benchmark datasets from different sensors (aerial, mobile, and static), together with associated information. We also investigate representative evaluation metrics for this task. To finish, we present open questions and research perspectives. By reviewing the relevant papers in the field, we highlight the potential of bi- and multi-temporal point clouds for better monitoring analysis for various applications.
    SDG 11: Sustainable cities and communities
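Many of the reviewed approaches build on a cloud-to-cloud (C2C) comparison: a point in the later epoch is flagged as changed when its nearest neighbour in the earlier epoch lies beyond a distance threshold. The sketch below is a minimal illustration of that idea, not any specific method from the review; the function name, the brute-force neighbour search, and the threshold value are assumptions for demonstration.

```python
import numpy as np

def c2c_change_mask(epoch1, epoch2, threshold=0.5):
    """Flag points of `epoch2` (N x 3) whose nearest neighbour in
    `epoch1` (M x 3) is farther than `threshold` (C2C distance).

    Brute-force search for clarity; a KD-tree would be used on
    real clouds with millions of points.
    """
    # squared distances between every epoch2 point and every epoch1 point
    d2 = np.sum((epoch2[:, None, :] - epoch1[None, :, :]) ** 2, axis=2)
    nn_dist = np.sqrt(d2.min(axis=1))  # nearest-neighbour distance per point
    return nn_dist > threshold         # True = likely change
```

A point that persists between epochs yields a near-zero nearest-neighbour distance, while a newly appeared (or displaced) object exceeds the threshold; noise and occlusion are precisely what make choosing that threshold hard in practice.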

    Multi-Context Point Cloud Dataset and Machine Learning for Railway Semantic Segmentation

    Peer reviewed
    Railway scene understanding is crucial for various applications, including autonomous trains, digital twinning, and infrastructure change monitoring. However, the development of such applications is constrained by the lack of annotated datasets and the limitations of existing algorithms. To address this challenge, we present Rail3D, the first comprehensive dataset for semantic segmentation in railway environments, together with a comparative analysis. Rail3D encompasses three distinct railway contexts from Hungary, France, and Belgium, capturing a wide range of railway assets and conditions. With over 288 million annotated points, Rail3D surpasses existing datasets in size and diversity, enabling the training of generalizable machine learning models. We conducted a generic classification with nine universal classes (Ground, Vegetation, Rail, Poles, Wires, Signals, Fence, Installation, and Building) and evaluated the performance of three state-of-the-art models: KPConv (Kernel Point Convolution), LightGBM, and Random Forest. The best-performing model, a fine-tuned KPConv, achieved a mean Intersection over Union (mIoU) of 86%, while the LightGBM-based method achieved a mIoU of 71%, outperforming Random Forest. This study will benefit infrastructure experts and railway researchers by providing a comprehensive dataset and benchmarks for 3D semantic segmentation. The data and code are publicly available for France and Hungary, with continuous updates based on user feedback.
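The mIoU figures quoted above (86% for KPConv, 71% for LightGBM) follow the standard definition: per-class Intersection over Union averaged over the classes present. A minimal sketch of that metric, independent of any particular model:

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean Intersection over Union over per-point class labels.

    For each class c: IoU = |true==c AND pred==c| / |true==c OR pred==c|.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

With nine classes as in Rail3D, `num_classes=9`; the labels would come from the point-cloud annotations and the model's per-point predictions.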

    A Prior Level Fusion Approach for the Semantic Segmentation of 3D Point Clouds Using Deep Learning

    Peer reviewed
    Three-dimensional digital models play a pivotal role in city planning, monitoring, and sustainable management of smart and Digital Twin Cities (DTCs). In this context, semantic segmentation of airborne 3D point clouds is crucial for modeling, simulating, and understanding large-scale urban environments. Previous research has demonstrated that the performance of 3D semantic segmentation can be improved by fusing 3D point clouds with other data sources. In this paper, a new prior-level fusion approach is proposed for semantic segmentation of large-scale urban areas using optical images and point clouds. The proposed approach uses an image classification obtained by the Maximum Likelihood Classifier as prior knowledge for 3D semantic segmentation. Afterwards, the raster values from the classified images are assigned to the Lidar point cloud at the data preparation step. Finally, an advanced Deep Learning model (RandLaNet) is adopted to perform the 3D semantic segmentation. The results show that the proposed approach performs well in terms of both evaluation metrics and visual examination, reaching a higher Intersection over Union (96%) on the created dataset, compared with 92% for the non-fusion approach.
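The data-preparation step described above, assigning each Lidar point the class of the raster cell it falls in, can be sketched as follows. This is an illustrative assumption of the fusion step only (a north-up grid with a given origin and cell size), not the paper's actual implementation, and the downstream RandLaNet training is omitted.

```python
import numpy as np

def attach_prior_labels(points, class_raster, origin, cell_size):
    """Prior-level fusion step: give each 3D point (N x 3, columns x/y/z)
    the class value of the 2D classified raster cell containing it.

    Assumes a north-up raster whose top-left corner is `origin` (x0, y0)
    with square cells of side `cell_size`.
    """
    cols = ((points[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((origin[1] - points[:, 1]) / cell_size).astype(int)
    # clamp points falling on or just outside the raster edge
    rows = np.clip(rows, 0, class_raster.shape[0] - 1)
    cols = np.clip(cols, 0, class_raster.shape[1] - 1)
    return class_raster[rows, cols]  # one prior class label per point
```

The returned labels would then be appended as an extra per-point feature before training the 3D segmentation network, which is what lets the image classification act as prior knowledge.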