    Semantic Labeling of Mobile LiDAR Point Clouds via Active Learning and Higher Order MRF

    Using mobile Light Detection and Ranging (LiDAR) point clouds to accomplish road scene labeling tasks shows promise for a variety of applications. Most existing methods for semantic labeling of point clouds require a huge number of fully supervised point cloud scenes, in which every point must be manually annotated with a specific category. Manually annotating each point is labor intensive and hinders the practical use of those methods. To alleviate this burden of manual annotation, this paper introduces an active learning method that avoids annotating whole point cloud scenes by iteratively annotating a small portion of unlabeled supervoxels, creating a minimal manually annotated training set. To avoid the biased sampling of traditional active learning methods, a neighbor-consistency prior is exploited to select potentially misclassified samples into the training set and improve the accuracy of the statistical model. Furthermore, many methods consider only short-range contextual information for semantic labeling and ignore the long-range contexts among local variables. This paper uses a higher order Markov random field model to take more context into account when refining the labeling results, despite the lack of fully supervised scenes. Evaluations on three data sets show that the proposed framework achieves high labeling accuracy even though only a small portion of labels is provided. Moreover, comparative experiments demonstrate that the proposed framework is superior to traditional sampling methods and performs comparably to fully supervised models.
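
    As a rough illustration of the sample-selection step described above, the sketch below scores each unlabeled supervoxel by how strongly its predicted label disagrees with the labels predicted for its neighbours and queries the most inconsistent ones for annotation. The scoring rule, the adjacency structure, and the annotation budget are assumptions for illustration; the abstract does not specify the exact neighbor-consistency criterion.

        # Minimal sketch of neighbor-consistency-based sample selection for
        # active learning over supervoxels. The disagreement score used as the
        # "potentially misclassified" criterion is an assumption, not the
        # paper's exact rule.
        import numpy as np

        def select_for_annotation(pred_labels, neighbors, unlabeled_idx, budget=50):
            """Pick the unlabeled supervoxels whose predicted label disagrees
            most with the labels predicted for their spatial neighbours.

            pred_labels: (N,) predicted class per supervoxel (any classifier).
            neighbors:   list of index arrays; neighbors[i] = adjacent supervoxels.
            budget:      illustrative annotation budget per iteration (assumption).
            """
            scores = []
            for i in unlabeled_idx:
                nbr = neighbors[i]
                if len(nbr) == 0:
                    scores.append((0.0, i))
                    continue
                # fraction of neighbours whose prediction disagrees with i's
                disagree = float(np.mean(pred_labels[nbr] != pred_labels[i]))
                scores.append((disagree, i))
            scores.sort(reverse=True)          # most inconsistent first
            return [i for _, i in scores[:budget]]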

    Realistic correction of sky-coloured points in Mobile Laser Scanning point clouds

    Enriching point clouds with colour images improves the visualisation of the data as well as segmentation and recognition processes. Coloured point clouds are becoming increasingly common; however, the colour they display is not always as expected. Errors in the colouring of point clouds acquired with Mobile Laser Scanning are due to perspective in the camera image, differing resolutions, or poor calibration between the LiDAR sensor and the image sensor. The consequences of these errors are noticeable for elements captured in the images but not in the point clouds, such as the sky. This paper focuses on correcting sky-coloured points without resorting to the images that were initially used to colour the whole point cloud. The proposed method consists of three stages. First, the region of interest where the erroneously coloured points accumulate is selected. Second, the sky-coloured points are detected by calculating, in the Lab colour space, the colour distance to a sample of the sky colour. Third, the colour of the detected sky-coloured points is restored from the colour of nearby points. The method is tested on ten real case studies with their corresponding point clouds from urban and rural areas. In two case studies the sky-coloured points were assigned manually, and in the remaining eight they derive from acquisition errors. The algorithm for sky-coloured point detection obtained an average F1-score of 94.7%. The results show a correct reassignment of colour, texture, and patterns, while improving the point cloud visualisation.
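
    The detection and restoration stages lend themselves to a compact sketch. The snippet below flags points whose Euclidean distance in Lab space to a sky-colour sample falls below a threshold and recolours them from their nearest non-sky neighbours. The threshold, the neighbour count, and the use of scikit-image and a KD-tree are illustrative assumptions rather than the paper's actual parameters, and the region-of-interest stage is omitted.

        # Minimal sketch of the detection and restoration stages described
        # above; all numeric parameters are illustrative assumptions.
        import numpy as np
        from scipy.spatial import cKDTree
        from skimage.color import rgb2lab

        def correct_sky_points(xyz, rgb, sky_rgb_sample, dist_thresh=12.0, k=8):
            """xyz: (N,3) point coordinates; rgb: (N,3) colours in [0, 1].
            dist_thresh and k are illustrative values, not the paper's."""
            lab = rgb2lab(rgb.reshape(-1, 1, 3)).reshape(-1, 3)
            sky_lab = rgb2lab(np.asarray(sky_rgb_sample, float).reshape(1, 1, 3)).reshape(3)
            # detection: Lab-space distance to the sky-colour sample
            sky_mask = np.linalg.norm(lab - sky_lab, axis=1) < dist_thresh
            # restoration: average colour of the k nearest non-sky points
            tree = cKDTree(xyz[~sky_mask])
            _, nbr = tree.query(xyz[sky_mask], k=k)
            out = rgb.copy()
            out[sky_mask] = rgb[~sky_mask][nbr].mean(axis=1)
            return out, sky_mask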

    Semantic segmentation of outdoor scenes using LIDAR cloud point

    In this paper we present a novel street scene semantic recognition framework that takes advantage of 3D point clouds captured by a high definition LiDAR laser scanner. An important problem in object recognition is the need for sufficient labeled training data to learn robust classifiers. We show how to significantly reduce the need for manually labeled training data by reducing scene complexity through unsupervised ground and building segmentation. Our system first segments the ground point cloud automatically; because the ground connects almost all other objects, a connected-component-based algorithm is then used to over-segment the point cloud. Next, building facades are detected using binary range image processing. The remaining points are grouped into voxels, which are then transformed into supervoxels. Local 3D features extracted from the supervoxels are classified by trained boosted decision trees and labeled with semantic classes, e.g. tree, pedestrian, car. Given the labeled 3D point cloud and a 2D image with known viewing camera pose, the proposed association module aligns collections of 3D points with groups of 2D image pixels to parse 2D cubic images. A noticeable advantage of our method is its robustness to different lighting conditions, shadows, and city landscapes. The proposed method is evaluated both quantitatively and qualitatively on a challenging fixed-position Terrestrial Laser Scanning (TLS) Velodyne data set and on Mobile Laser Scanning (MLS) NAVTEQ True databases. Robust scene parsing results are reported.
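
    The supervoxel classification step can be sketched as follows: simple covariance (eigenvalue) features per supervoxel are fed to boosted decision trees. The feature set and the scikit-learn GradientBoostingClassifier settings are illustrative assumptions; the paper's exact descriptors and boosting variant are not given in the abstract.

        # Minimal sketch of supervoxel feature extraction and boosted decision
        # tree classification; the descriptors and classifier settings below
        # are assumptions for illustration.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        def supervoxel_features(points):
            """points: (M,3) array holding the points of one supervoxel."""
            cov = np.cov(points.T)
            e = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-9   # e1 >= e2 >= e3
            linearity  = (e[0] - e[1]) / e[0]
            planarity  = (e[1] - e[2]) / e[0]
            scattering = e[2] / e[0]
            height = points[:, 2].max() - points[:, 2].min()
            return [linearity, planarity, scattering, height]

        def train_classifier(supervoxels, labels):
            """supervoxels: list of (M_i,3) arrays; labels: class per supervoxel."""
            X = np.array([supervoxel_features(sv) for sv in supervoxels])
            clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
            clf.fit(X, labels)
            return clf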

    Visual computing techniques for automated LIDAR annotation with application to intelligent transport systems

    The concept of Intelligent Transport Systems (ITS) refers to the application of communication and information technologies to transport with the aim of making it more efficient, sustainable, and safer. Computer vision is increasingly being used for ITS applications such as infrastructure management or advanced driver-assistance systems. The latest progress in computer vision, driven by Deep Learning techniques, and the race toward autonomous vehicles have created a growing requirement for annotated data in the automotive industry. The data to be annotated consist of images captured by the vehicles' cameras and LIDAR data in the form of point clouds. LIDAR sensors are used for tasks such as object detection and localization; their capacity to identify objects at long distances and to provide estimates of their distance makes them very appealing sensors for autonomous driving. This thesis presents a method to automate the annotation of lane markings with LIDAR data. The state of the art in lane marking detection based on LIDAR data is reviewed and a novel method is presented. The precision of the method is evaluated against manually annotated data. Its usefulness is also evaluated by measuring the reduction in the time required to annotate new data thanks to the automatically generated pre-annotations. Finally, the conclusions of the thesis and possible future research lines are presented.

    Semi-automated Generation of Road Transition Lines Using Mobile Laser Scanning Data

    Recent advances in autonomous vehicles (AVs) have been rapid. Prominent car manufacturers, academic institutions, and corresponding governmental departments around the world are taking active roles in the AV industry. Although attempts to integrate AV technology into smart roads and smart cities have been underway for more than half a century, the High Definition Road Maps (HDRMs) that would assist fully self-driving autonomous vehicles do not yet exist. Mobile Laser Scanning (MLS) has enormous potential for the construction of HDRMs because of its flexibility in collecting wide coverage of street scenes and 3D information on scanned targets. However, without proper and efficient processing it is difficult to generate HDRMs from MLS point clouds. This study addresses the research gaps and difficulties in generating transition lines (the paths that pass through a road intersection) from MLS point clouds. The proposed method contains three modules: road surface detection, lane marking extraction, and transition line generation. First, the points covering the road surface are extracted using voxel-based upward-growing and improved region growing. Then, lane markings are extracted and identified by multi-thresholding and geometric filtering. Finally, transition lines are generated through a combination of a lane node structure generation algorithm and the cubic Catmull-Rom spline algorithm. The experimental results demonstrate that transition lines can be successfully generated for both T- and cross-intersections with promising accuracy. Validated against manually interpreted lane marking points, the method achieves 90.80% precision, 92.07% recall, and 91.43% F1-score. The success rate of transition line generation is 96.5%. Furthermore, the Buffer-Overlay-Statistics (BOS) method confirms that the proposed method can generate lane centerlines and transition lines from MLS point clouds with localization accuracy at the 20 cm level. In addition, a comparative study indicates that the proposed road marking extraction method performs better than three other existing methods. In conclusion, this study makes a considerable contribution to the research on generating transition lines for HDRMs, which in turn contributes to research on AVs.
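
    The last module, generating a smooth transition line through ordered lane nodes with a cubic Catmull-Rom spline, can be sketched as below. The node extraction itself (upward-growing, multi-thresholding, geometric filtering) is assumed to have been done already, and the uniform parameterisation and sampling density are illustrative choices.

        # Minimal sketch of fitting a uniform Catmull-Rom spline through
        # ordered lane-node points to produce a dense transition line; the
        # parameterisation and sampling density are illustrative assumptions.
        import numpy as np

        def catmull_rom(p0, p1, p2, p3, n=20):
            """Evaluate a uniform Catmull-Rom segment between p1 and p2."""
            t = np.linspace(0.0, 1.0, n)[:, None]
            return 0.5 * ((2 * p1)
                          + (-p0 + p2) * t
                          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                          + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

        def transition_line(nodes, n_per_segment=20):
            """nodes: (K,2) or (K,3) ordered lane nodes; returns a dense polyline."""
            nodes = np.asarray(nodes, float)
            # duplicate the end nodes so the curve passes through them
            pts = np.vstack([nodes[0], nodes, nodes[-1]])
            segments = [catmull_rom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3],
                                    n_per_segment)
                        for i in range(len(pts) - 3)]
            return np.vstack(segments)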

    Building structural characterization using mobile terrestrial point cloud for flood risk anticipation

    Given the high frequency and major impact of floods, decision-makers, municipal stakeholders, and the ministry of public security urgently need tools to predict or assess the significance of flood events and their impact on the population. Statistically, the first floor of a building and its lower openings are the most likely to suffer damage during a flood. In the context of flood impact assessment, it is therefore necessary to identify the location of a building's lowest opening and, in particular, its height above the ground. The capacity to characterize a building with a relevant level of detail depends on the data sources used for modeling, and different sources have been employed to characterize building facades and openings. A vehicle-mounted Mobile Laser Scanning (MLS) system has proved to be one of the most reliable sources in this domain: thanks to its street-level viewpoint and close range, it produces millions of 3D georeferenced points with a sufficient level of detail on building facades and their openings. Moreover, the growing number of MLS providers and acquisitions in towns and rural environments makes it possible to develop approaches for characterizing residential houses at a provincial scale. Several challenges are nevertheless associated with extracting descriptive information about building facades from MLS data. Occlusions in front of a facade make it impossible to obtain 3D points on the covered parts of the facade. Because windows consist mostly of glass, which does not reflect laser signals, few points are usually available for them. While repetitive patterns and symmetrical positions of openings make detection easier, these characteristics are missing on the facades of rural and residential houses. Finally, the inconsistency of point density in MLS data makes the detection process even harder when working at city scale. Accordingly, the main objective of this research is to design and develop a comprehensive approach that effectively extracts facade openings. The proposed solution consists of three phases: facade extraction, opening detection, and occlusion recognition. The first phase employs an adaptive region-growing segmentation approach to extract the 3D bounding box of the facade. The second phase combines a hole-based assumption with an XZ gridding technique to extract 2D bounding boxes of the openings. The last phase, which discriminates holes caused by occlusion from true openings, is currently being completed. Qualitative and quantitative evaluations were performed using a real-world dataset of the Province of Quebec, Canada, provided by Jakarto Cartographie 3D Inc. The statistics revealed that the proposed approach achieves good performance rates despite the complexity of the dataset, which is representative of data acquired in real conditions. Challenges regarding facade self-occlusion and the presence of large surrounding occlusions should be investigated further to obtain more accurate information about facade openings.
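
    The second phase, hole-based opening detection via XZ gridding, can be sketched as follows: facade points are rasterised into an occupancy grid in the facade plane, and connected empty regions away from the grid border are reported as candidate opening bounding boxes. The cell size and the minimum hole area are illustrative assumptions; discriminating openings from occlusions (the third phase) is not handled here.

        # Minimal sketch of hole detection on a facade via XZ gridding; cell
        # size and minimum hole area are illustrative assumptions, and holes
        # caused by occlusions are not distinguished from real openings.
        import numpy as np
        from scipy import ndimage

        def detect_openings(facade_xz, cell=0.10, min_cells=9):
            """facade_xz: (N,2) facade points projected onto the facade (X,Z) plane."""
            mins = facade_xz.min(axis=0)
            idx = np.floor((facade_xz - mins) / cell).astype(int)
            shape = idx.max(axis=0) + 1
            occupied = np.zeros(shape, dtype=bool)
            occupied[idx[:, 0], idx[:, 1]] = True
            # connected empty regions = candidate holes (openings or occlusions)
            holes, _ = ndimage.label(~occupied)
            boxes = []
            for sl_x, sl_z in ndimage.find_objects(holes):
                touches_border = (sl_x.start == 0 or sl_z.start == 0 or
                                  sl_x.stop == shape[0] or sl_z.stop == shape[1])
                area = (sl_x.stop - sl_x.start) * (sl_z.stop - sl_z.start)
                if touches_border or area < min_cells:
                    continue
                boxes.append((mins[0] + sl_x.start * cell, mins[1] + sl_z.start * cell,
                              mins[0] + sl_x.stop * cell,  mins[1] + sl_z.stop * cell))
            return boxes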