8 research outputs found

    Design, Implementation, and Configuration of Laser Systems for Vehicle Detection and Classification in Real Time

    The use of real-time vehicle detection and classification systems is essential for the accurate management of traffic and road infrastructure. Over time, diverse systems have been proposed for this purpose, such as the widely known magnetic loops or microwave radars. However, these types of sensors do not offer all the information currently required for exhaustive and comprehensive traffic control. This paper therefore presents the design, implementation, and configuration of laser systems that obtain 3D profiles of vehicles and thus collect more precise information about the state of the roads. To obtain reliable information on vehicle traffic by means of these systems, however, a series of preliminary steps must be carried out correctly: choosing the most suitable type of laser, selecting its configuration properly, determining the optimal location, and processing the information provided accurately. This paper details a series of criteria to help make these crucial and difficult decisions. Following these guidelines, a complete laser system for vehicle detection and classification is presented as a result, characterized by its versatility and the ability to control up to four lanes in real time. This research has been funded by the Universitat Politecnica de Valencia through its internal project `Equipos de deteccion, regulacion e informacion en el sector de los sistemas inteligentes de transporte (ITS). Nuevos modelos y ensayos de compatibilidad y verificacion de funcionamiento', carried out at the ITACA Institute. Gallego Ripoll, N.; Gómez Aguilera, L. E.; Mocholí-Belenguer, F.; Mocholí Salcedo, A.; Ballester Merelo, F. J. (2021). Design, Implementation, and Configuration of Laser Systems for Vehicle Detection and Classification in Real Time. Sensors, 21(6), 1-18. https://doi.org/10.3390/s21062082
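
    As a rough illustration of the kind of processing such a system performs (not the authors' implementation; the scan rate, geometry, and class thresholds below are assumptions), a height profile can be assembled from successive laser scan lines once the vehicle speed is known, and simple length/height rules already give a coarse classification:

        # Minimal sketch (not the authors' implementation): assembling a vehicle
        # height profile from successive laser scan lines and applying simple
        # length/height thresholds for classification. Scanner geometry, scan rate
        # and thresholds are illustrative assumptions.
        import numpy as np

        SCAN_RATE_HZ = 100.0          # assumed scan-line frequency of the laser

        def build_profile(scan_lines, vehicle_speed_mps):
            """Stack per-scan maximum heights into a longitudinal profile.

            scan_lines: (n_scans, n_points) array of heights above road level [m]
            vehicle_speed_mps: speed estimated e.g. from a second sensor [m/s]
            """
            heights = scan_lines.max(axis=1)                  # tallest point per scan
            x = np.arange(len(heights)) * vehicle_speed_mps / SCAN_RATE_HZ
            return x, heights

        def classify(x, heights, presence_thresh=0.3):
            """Very coarse class decision from profile length and maximum height."""
            on_vehicle = heights > presence_thresh            # points above road noise
            if not on_vehicle.any():
                return "no vehicle"
            length = x[on_vehicle][-1] - x[on_vehicle][0]
            if length > 10.0:
                return "truck/bus"
            if heights.max() > 2.0:
                return "van"
            return "car"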

    Down-sampling of large lidar dataset in the context of off-road objects extraction

    Nowadays, LiDAR (Light Detection and Ranging) is used in many fields, such as transportation. Thanks to recent technological improvements, the current generation of LiDAR mapping instruments on the market can acquire up to millions of three-dimensional (3D) points per second. On the one hand, such improvements have allowed the development of LiDAR-based systems with increased productivity, enabling the quick acquisition of detailed 3D descriptions of the objects of interest. On the other hand, extracting the information of interest from such a huge amount of acquired data can be quite challenging and time demanding. Motivated by this observation, this paper proposes the use of the Optimum Dataset method to ease and speed up the information extraction phase by significantly reducing the size of the acquired dataset while preserving the information of interest. The paper focuses on the reduction of LiDAR datasets acquired on roads, with the goal of extracting the off-road objects. Motivated largely by the need to map roads and quickly determine car position along a road, the development of efficient methods for extracting this kind of information is becoming a hot topic in the research community.
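
    The Optimum Dataset method itself is not detailed in this abstract, so the sketch below only illustrates the general idea of dataset reduction using a plain voxel-grid down-sampling baseline; the voxel size and the centroid-per-voxel choice are assumptions, not the paper's method:

        # Illustrative baseline only: a plain voxel-grid down-sampling of a LiDAR
        # point cloud. This is NOT the Optimum Dataset method described in the
        # abstract; it merely shows the kind of size reduction being discussed.
        import numpy as np

        def voxel_downsample(points, voxel_size=0.10):
            """Keep one representative point (the centroid) per occupied voxel.

            points: (N, 3) array of x, y, z coordinates in metres
            """
            keys = np.floor(points / voxel_size).astype(np.int64)
            _, inverse = np.unique(keys, axis=0, return_inverse=True)
            inverse = inverse.ravel()                     # flat voxel index per point
            sums = np.zeros((inverse.max() + 1, 3))
            counts = np.zeros(inverse.max() + 1)
            np.add.at(sums, inverse, points)              # accumulate per-voxel sums
            np.add.at(counts, inverse, 1)                 # and per-voxel point counts
            return sums / counts[:, None]                 # centroid of each voxel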

    A Review of Three-Dimensional (3D) Descriptors for Categorizing Point Clouds Acquired with a Mobile Telemetry LiDAR System

    Understanding a LiDAR point cloud entails recognizing the objects present in the scene and associating interpretations with the object clouds that make it up. LiDAR data acquired in large-scale urban environments with land-based mobile telemetry systems present several challenges specific to this context: overlapping point clouds, occlusions between objects that are only partially visible, and variations in point density. Given these challenges, many of the 3D descriptors proposed in the literature for classifying and recognizing objects see their performance degrade in this application context, because they were often introduced and assessed with datasets of small objects. In addition, there is a lack of thorough comparison between 3D descriptors applied in large-scale environments, which leaves a gap in knowledge when selecting the 3D descriptor best adapted to a LiDAR point cloud acquired in such an environment. This article provides an in-depth review of works applying 3D descriptors to LiDAR data acquired in large-scale urban environments with land-based mobile telemetry systems. The key 3D descriptors applied in such contexts are inventoried, and a comparative synthesis of their performance and limitations is carried out on the basis of the works available in the literature. Finally, a discussion of the factors that most affect descriptor performance and of avenues for improvement completes this review.
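
    One widely used family of 3D descriptors in this kind of survey is the covariance (eigenvalue-based) features of a local neighbourhood; the sketch below is illustrative only and does not reproduce any specific descriptor reviewed in the article:

        # Illustrative example of one common family of 3D descriptors:
        # covariance (eigenvalue-based) features of a local neighbourhood.
        # The specific descriptors surveyed in the article may differ.
        import numpy as np

        def covariance_features(neighbourhood):
            """Return (linearity, planarity, sphericity) for an (N, 3) neighbourhood."""
            centered = neighbourhood - neighbourhood.mean(axis=0)
            cov = centered.T @ centered / max(len(neighbourhood) - 1, 1)
            evals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # l1 >= l2 >= l3 >= 0
            l1, l2, l3 = evals + 1e-12                        # avoid division by zero
            linearity = (l1 - l2) / l1
            planarity = (l2 - l3) / l1
            sphericity = l3 / l1
            return linearity, planarity, sphericity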

    Urban Roadside Tree Inventory Using a Mobile Laser Scanning System

    the road environment. Thus, effective methods are needed for processing MLS data. The main goal of this thesis is to establish a feasible workflow by testing a series of methods to extract geometric information about roadside trees from MLS-acquired point clouds. The workflow developed in this study consists of three parts. The first part deals with ground point removal, so that only off-ground points are used to extract trees. The second part handles tree detection by comparing four segmentation and clustering methods: the Euclidean distance clustering algorithm, the region growing segmentation method, the normalized cut (Ncut) method, and the supervoxel-based tree detection method. The third part focuses on automated extraction of tree geometric parameters such as tree height, diameter at breast height (DBH), crown spread, and horizontal slice features. Finally, tree species classification was conducted using the k-Nearest Neighbour (k-NN) and random forests (RF) algorithms. A total of four MLS datasets (three in Xiamen, China and one in Kingston, Ontario), acquired in 2013 and 2015, were used to test the developed method. The ground truth data for DBH estimation were obtained through manual measurement of selected roadside trees after the two MLS missions in Xiamen in the fall of 2015. The field-surveyed DBH values of 163 roadside trees were used to estimate the accuracy of the proposed tree extraction method, and 200 manually labelled trees of 8 different species were selected to examine the accuracy of the proposed classification method. The results show that over 90% of the roadside trees were correctly detected, with an average error of about 5% in DBH estimation compared to the field survey, and an overall accuracy of 78% for the classification of tree species.
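
    As an illustration of the geometric parameter extraction described above (a sketch under assumptions, not the thesis workflow verbatim), DBH can be estimated by fitting a least-squares (Kåsa) circle to a thin horizontal slice of stem points around breast height (1.3 m):

        # Minimal sketch under assumptions: estimate DBH by fitting a circle
        # (Kåsa least-squares fit) to stem points taken from a thin horizontal
        # slice around breast height (1.3 m above ground).
        import numpy as np

        def dbh_from_slice(points, ground_z, slice_height=1.3, slice_thickness=0.1):
            """points: (N, 3) array of a single segmented tree; returns DBH in metres."""
            z = points[:, 2] - ground_z
            mask = np.abs(z - slice_height) < slice_thickness / 2.0
            xy = points[mask, :2]
            if len(xy) < 10:
                return None                                    # not enough stem points
            x, y = xy[:, 0], xy[:, 1]
            # Kåsa fit: solve [x y 1] [a b c]^T = x^2 + y^2 in the least-squares sense.
            A = np.column_stack([x, y, np.ones_like(x)])
            b = x**2 + y**2
            a, bcoef, c = np.linalg.lstsq(A, b, rcond=None)[0]
            cx, cy = a / 2.0, bcoef / 2.0                      # circle centre
            radius = np.sqrt(c + cx**2 + cy**2)
            return 2.0 * radius                                # diameter = DBH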

    Semi-automated Generation of High-accuracy Digital Terrain Models along Roads Using Mobile Laser Scanning Data

    Transportation agencies in many countries require high-accuracy (2-20 cm) digital terrain models (DTMs) along roads for various transportation-related applications. Compared to traditional ground surveys and aerial photogrammetry, mobile laser scanning (MLS) has great potential for the rapid acquisition of high-density, high-accuracy three-dimensional (3D) point clouds covering roadways. Such MLS point clouds can be used to generate high-accuracy DTMs in a cost-effective fashion. However, the large volume, mixed density, and irregular distribution of MLS points, as well as the complexity of the roadway environment, make DTM generation a very challenging task. In addition, most available software packages were originally developed for handling airborne laser scanning (ALS) point clouds and cannot be used directly to process MLS point clouds. Therefore, methods and software tools to automatically generate DTMs along roads are urgently needed by transportation users. This thesis presents an applicable workflow to generate DTMs from MLS point clouds. The strategy is divided into two main parts: removing non-ground points and interpolating ground points into gridded DTMs. First, a voxel-based upward-growing algorithm was developed to effectively and accurately remove non-ground points. Then, through a comparative study of four interpolation algorithms, namely the Inverse Distance Weighted (IDW), Nearest Neighbour, Linear, and Natural Neighbours algorithms, IDW interpolation was selected to generate gridded DTMs due to its higher accuracy and computational efficiency. The results demonstrated that the voxel-based upward-growing algorithm is suitable for areas without steep terrain features; its average overall accuracy, correctness, and completeness were 0.975, 0.980, and 0.986, respectively, and in some cases the overall accuracy exceeded 0.990. The results also demonstrated that the semi-automated DTM generation method developed in this thesis was able to create DTMs with a centimetre-level grid size and 10 cm vertical accuracy from the MLS point clouds.
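
    The gridding step described above can be illustrated with a short IDW sketch; the cell size, neighbour count, and power below are assumptions rather than the thesis' actual parameters:

        # Minimal sketch of the gridding step: inverse distance weighted (IDW)
        # interpolation of ground-classified points onto a regular DTM grid.
        # Grid size, neighbour count and power are illustrative assumptions.
        import numpy as np
        from scipy.spatial import cKDTree

        def idw_dtm(ground_points, cell=0.5, k=8, power=2.0):
            """ground_points: (N, 3) ground-classified MLS points; returns (xs, ys, Z)."""
            xy, z = ground_points[:, :2], ground_points[:, 2]
            tree = cKDTree(xy)
            xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), cell)
            ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), cell)
            gx, gy = np.meshgrid(xs, ys)
            targets = np.column_stack([gx.ravel(), gy.ravel()])
            dist, idx = tree.query(targets, k=k)               # k nearest ground points
            w = 1.0 / np.maximum(dist, 1e-6) ** power          # IDW weights
            Z = (w * z[idx]).sum(axis=1) / w.sum(axis=1)       # weighted elevation
            return xs, ys, Z.reshape(gy.shape)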

    A Study on 3D Point Cloud Data-Based Object Detection and Classification Techniques for Autonomous Driving

    Doctoral dissertation, Department of Electrical and Computer Engineering, Seoul National University Graduate School, February 2017 (advisor: Seung-Woo Seo). A 3D LIDAR provides 3D surface information of objects with the highest positional accuracy among the sensors available for developing perception algorithms for automated driving vehicles. For automated driving, this accurate surface information offers two benefits: 1) accurate position information, which is itself very useful for collision avoidance, is provided stably regardless of illumination conditions, because the LIDAR is an active sensor; and 2) the surface information can provide precise 3D shape-oriented features for object classification. Motivated by these characteristics, in this dissertation we propose three algorithms based on the 3D LIDAR for the perception tasks of automated driving vehicles. The very first step in using the 3D LIDAR as a perception sensor is segmentation, which transforms a stream of LIDAR measurements into multiple point groups, where each point group indicates an individual object near the sensor. In chapter 2, a real-time and accurate segmentation method is proposed. In particular, Gaussian Process regression is used to address over-segmentation, which increases false positives by partitioning an object into multiple portions. The segmentation result can be used as input to other perception algorithms, such as object classification, which is required for designing more human-like driving strategies. For example, it is important to recognize pedestrians in urban driving environments because avoiding collisions with pedestrians is nearly a top priority. In chapter 3, we propose a pedestrian recognition algorithm based on a Deep Neural Network architecture that learns appearance variation. Another traffic participant that should be recognized with high priority is the vehicle. Because vehicle types with differing appearances, such as sedans, buses, and trucks, are present on the road, vehicles must be detected with similar performance regardless of type. In chapter 4, we propose an algorithm that makes use of a common appearance of vehicles to solve this problem.
    To improve performance, a monocular camera is additionally employed, and the information from both sensors is integrated within a Dempster-Shafer Theory framework.
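
    The fusion step can be illustrated with Dempster's rule of combination over a toy frame of discernment; the mass values and the two-class frame below are illustrative assumptions, not the dissertation's actual model:

        # Minimal sketch of the fusion idea: Dempster's rule of combination for two
        # sources (e.g. LiDAR and camera) over the frame {vehicle, not_vehicle}.
        # The mass values and the frame are illustrative assumptions.
        from itertools import product

        def combine(m1, m2):
            """Combine two mass functions given as {frozenset: mass} dictionaries."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb                 # mass assigned to disjoint sets
            # Normalise by the non-conflicting mass (Dempster's rule).
            return {s: m / (1.0 - conflict) for s, m in combined.items()}

        V, NV = frozenset({"vehicle"}), frozenset({"not_vehicle"})
        THETA = V | NV                                  # total ignorance
        lidar  = {V: 0.6, NV: 0.1, THETA: 0.3}
        camera = {V: 0.7, NV: 0.2, THETA: 0.1}
        print(combine(lidar, camera))                   # fused belief masses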