Review of Automatic Processing of Topography and Surface Feature Identification LiDAR Data Using Machine Learning Techniques
Machine Learning (ML) applications on Light Detection And Ranging (LiDAR) data have produced promising results, and the topic has therefore been widely addressed in the literature in recent years. This paper reviews the essential and more recent studies in the topography and surface feature identification domain. Four aspects of the proposed approaches are analyzed and discussed: the input data, the point cloud structuring concepts used to apply ML, the ML techniques themselves, and the applications of ML to LiDAR data. An overview is then provided to underline the advantages and disadvantages of this research axis. Despite the training-data labelling problem, the computational cost, and the undesirable shortcuts introduced by data downsampling, most of the proposed methods use supervised ML concepts to classify downsampled LiDAR data. Furthermore, even where results are occasionally highly accurate, in most cases they still require filtering. In fact, a considerable number of the adopted approaches reuse the data structure concepts of image processing in order to profit from existing software tools. Given that LiDAR point clouds represent rich 3D data, more effort is needed to develop specialized processing tools.
Density-Aware Convolutional Networks with Context Encoding for Airborne LiDAR Point Cloud Classification
To better address the irregularity and inhomogeneity inherent in 3D point clouds, researchers have been shifting their focus from the design of hand-crafted point features towards learning 3D point signatures with deep neural networks for 3D point cloud classification. Recently proposed deep learning based point cloud classification methods either apply 2D CNNs to projected feature images or apply 1D convolutional layers directly to raw point sets. These methods cannot adequately recognize fine-grained local structures caused by the uneven density distribution of the point cloud data. In this paper, to address this challenge, we introduce a density-aware convolution module that uses the point-wise density to re-weight the learnable weights of the convolution kernels. The proposed convolution module is able to fully approximate the 3D continuous convolution on unevenly distributed 3D point sets. Based on this module, we further develop a multi-scale fully convolutional neural network with downsampling and upsampling blocks to enable hierarchical point feature learning. In addition, to regularize the global semantic context, we implement a context encoding module that predicts a global context encoding, and we formulate a context encoding regularizer that enforces alignment between the predicted encoding and the ground truth. The overall network can be trained end-to-end with the raw 3D coordinates and the height above ground as inputs. Experiments on the International Society for Photogrammetry and Remote Sensing (ISPRS) 3D labeling benchmark demonstrate the superiority of the proposed method for point cloud classification. Our model achieves a new state-of-the-art average F1 score of 71.2% and improves performance by a large margin on several categories.
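As a concrete illustration of the density re-weighting idea described in this abstract, here is a minimal PyTorch sketch; the kNN neighborhood layout, the radius-count density proxy, and the MLP sizes are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a density-aware point convolution, assuming a
# PointNet++-style neighborhood layout with precomputed kNN indices.
import torch
import torch.nn as nn

class DensityAwarePointConv(nn.Module):
    def __init__(self, in_dim, out_dim, radius=1.0):
        super().__init__()
        self.radius = radius
        # Weight function: maps relative neighbor offsets to kernel weights.
        self.weight_net = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, in_dim))
        # Density branch: maps a scalar per-point density to a re-weighting factor.
        self.density_net = nn.Sequential(
            nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, xyz, feats, neighbor_idx):
        # xyz: (N, 3), feats: (N, C), neighbor_idx: (N, K) long tensor.
        offsets = xyz[neighbor_idx] - xyz[:, None, :]            # (N, K, 3)
        # Density proxy: fraction of the K neighbors within the radius.
        density = (offsets.norm(dim=-1, keepdim=True)
                   < self.radius).float().mean(dim=1)            # (N, 1)
        kernel = self.weight_net(offsets)                        # (N, K, C)
        # Re-weight the learned kernel by the density factor (the core idea).
        kernel = kernel * self.density_net(density)[:, None, :]  # (N, 1, 1) bcast
        agg = (kernel * feats[neighbor_idx]).sum(dim=1)          # (N, C)
        return self.linear(agg)
```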
3D detection of roof sections from a single satellite image and application to LOD2-building reconstruction
Reconstructing urban areas in 3D from satellite raster images has been a long-standing and challenging goal of both academic and industrial research. The rare methods that achieve this today at Level of Details (LOD) 2 rely on procedural, geometry-based approaches and need stereo images and/or LiDAR data as input. We propose a method for urban 3D reconstruction named KIBS (Keypoints Inference By Segmentation), which comprises two novel features: i) a fully deep learning approach for the 3D detection of roof sections, and ii) a single (non-orthogonal) satellite raster image as the only model input. This is achieved in two steps: i) a Mask R-CNN model performs a 2D segmentation of the buildings' roof sections, and these segmented pixels are blended into the RGB satellite raster image; ii) another identical Mask R-CNN model then infers the heights-to-ground of the roof sections' corners via panoptic segmentation, yielding a full 3D reconstruction of the buildings and the city. We demonstrate the potential of the KIBS method by reconstructing different urban areas in a few minutes, with a Jaccard index for the 2D segmentation of individual roof sections of … and … on our two data sets respectively, and a mean height error for such correctly segmented pixels in the 3D reconstruction of … m and … m on our two data sets respectively, hence within the LOD2 precision range.
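A minimal sketch of the two-pass structure, using torchvision's off-the-shelf Mask R-CNN as a stand-in: the COCO weights here are placeholders for the paper's roof-trained models, and the second-pass height network is hypothetical and only referenced in comments.

```python
# First pass of a KIBS-style pipeline: segment roof sections, then blend the
# segmented pixels back into the RGB raster for the second (height) network.
import torch
import torchvision

seg_model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights="DEFAULT").eval()   # placeholder weights; the paper trains on roofs

@torch.no_grad()
def kibs_first_pass(image):                      # image: (3, H, W) float in [0, 1]
    out = seg_model([image])[0]                  # instance masks of roof sections
    masks = out["masks"][out["scores"] > 0.5]    # keep confident instances
    blended = image.clone()
    for m in masks:                              # m: (1, H, W) soft mask
        # Simple stand-in for the paper's blending: lighten roof pixels so the
        # second, identically structured network can localize the sections.
        blended = torch.where(m > 0.5, 0.5 * blended + 0.5, blended)
    return blended, masks
    # Second pass (not shown): a hypothetical `height_model(blended)` would infer
    # heights-to-ground of the roof-section corners via panoptic segmentation.
```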
Road Information Extraction from Mobile LiDAR Point Clouds using Deep Neural Networks
Urban roads, as one of the essential transportation infrastructures, are a major driver of rapid urban growth and bring notable economic and social benefits. Accurate and efficient extraction of road information plays a significant role in the development of autonomous vehicles (AVs) and high-definition (HD) maps. Mobile laser scanning (MLS) systems have been widely used for many transportation-related studies and applications in road inventory, including road object detection, pavement inspection, road marking segmentation and classification, and road boundary extraction, benefiting from their large-scale data coverage, high surveying flexibility, high measurement accuracy, and reduced weather sensitivity. Road information from MLS point clouds is significant for road infrastructure planning and maintenance, and has an important impact on transportation-related policymaking, driving behaviour regulation, and traffic efficiency enhancement.
Compared to existing threshold-based and rule-based road information extraction methods, deep learning methods have demonstrated superior performance in 3D road object segmentation and classification tasks. However, three main challenges remain that impede deep learning methods from precisely and robustly extracting road information from MLS point clouds. (1) Point clouds obtained from MLS systems are typically large in volume and irregular in format, which presents significant challenges for managing and processing such massive unstructured point sets. (2) Variations in point density and intensity are inevitable because of the profiling scanning mechanism of MLS systems. (3) Due to occlusions and the limited scanning range of onboard sensors, some road objects are incomplete, which considerably degrades the performance of threshold-based methods for extracting road information.
To deal with these challenges, this doctoral thesis proposes several deep neural networks that encode inherent point cloud features and extract road information. These novel deep learning models have been tested on several datasets and deliver robust and accurate road information extraction results compared to state-of-the-art deep learning methods in complex urban environments. First, an end-to-end feature extraction framework for 3D point cloud segmentation is proposed using dynamic point-wise convolutional operations at multiple scales. This framework is less sensitive to data distribution and computational power. Second, a capsule-based deep learning framework to extract and classify road markings is developed to update road information and support HD maps. It demonstrates the practical application of combining capsule networks with hierarchical feature encodings of georeferenced feature images. Third, a novel deep learning framework for road boundary completion is developed using MLS point clouds and satellite imagery, based on the U-shaped network and the conditional deep convolutional generative adversarial network (c-DCGAN). Empirical evidence from experiments against state-of-the-art methods demonstrates the superior performance of the proposed models in road object semantic segmentation, road marking extraction and classification, and road boundary completion tasks.
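As a sketch of the conditional GAN component, the pair below follows the generic c-GAN pattern for raster completion; the layer sizes and channel counts are illustrative assumptions, and the thesis's U-shaped generator additionally uses skip connections.

```python
# Minimal conditional GAN sketch in the spirit of a c-DCGAN for road boundary
# completion: the generator is conditioned on an incomplete boundary raster
# stacked with satellite-image channels; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, cond_ch=4):               # e.g. 1 boundary + 3 RGB channels
        super().__init__()
        self.net = nn.Sequential(                # a U-shaped net would add skips
            nn.Conv2d(cond_ch, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, cond):
        return self.net(cond)                    # completed road-boundary raster

class Discriminator(nn.Module):
    def __init__(self, cond_ch=4):
        super().__init__()
        self.net = nn.Sequential(                # judges (condition, sample) pairs
            nn.Conv2d(cond_ch + 1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, 2, 1),
        )

    def forward(self, cond, sample):
        return self.net(torch.cat([cond, sample], dim=1))
```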
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
Towards Efficient 3D Reconstructions from High-Resolution Satellite Imagery
Recent years have witnessed the rapid growth of commercial satellite imagery. Compared with other imaging products, such as aerial or street-view imagery, modern satellite images are captured at high resolution and with multiple spectral bands, and thus provide unique viewing angles, global coverage, and frequent updates of the Earth's surface. With automated processing and intelligent analysis algorithms, satellite images can enable global-scale 3D modeling applications.
This dissertation explores computer vision algorithms to reconstruct 3D models from satellite images at different levels: geometric, semantic, and parametric reconstructions. However, reconstructing from satellite imagery is particularly challenging for the following reasons: 1) Satellite images typically contain an enormous amount of raw pixels, so efficient algorithms are needed to minimize the substantial computational burden. 2) The ground sampling distance of satellite images is comparatively coarse; visual entities such as buildings appear small and cluttered, posing difficulties for 3D modeling. 3) Satellite images usually have complex camera models and inaccurate vendor-provided camera calibrations. Rational polynomial coefficients (RPC) camera models, although widely used, need to be appropriately handled to ensure high-quality reconstructions.
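For reference, the RPC camera model mentioned above has the standard form below: normalized image coordinates are ratios of cubic polynomials in the normalized ground coordinates $(X, Y, Z)$ (latitude, longitude, height), each polynomial carrying 20 coefficients.

```latex
\begin{aligned}
r_n &= \frac{P_1(X, Y, Z)}{P_2(X, Y, Z)}, \qquad
c_n  = \frac{P_3(X, Y, Z)}{P_4(X, Y, Z)},\\[4pt]
P_i(X, Y, Z) &= \sum_{\substack{j,k,l \ge 0 \\ j+k+l \le 3}} a^{(i)}_{jkl}\, X^j Y^k Z^l .
\end{aligned}
```

Vendor calibration errors enter through the coefficients $a^{(i)}_{jkl}$, which is why the pipeline below compensates for the RPC bias before dense reconstruction.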
To obtain geometric reconstructions efficiently, we propose an edge-aware interpolation-based algorithm to obtain 3D point clouds from satellite image pairs. Initial 2D pixel matches are first established and triangulated to compensate for the RPC calibration errors. Dense (but noisy) correspondences can then be estimated by interpolating the inlier matches in an edge-aware manner. After refining the correspondence map with a fast bilateral solver, we obtain dense 3D point clouds via triangulation.
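A rough sketch of this pipeline, assuming OpenCV's contrib modules for matching and edge-aware interpolation; `rpc_triangulate`, `rpc_triangulate_dense`, and `fast_bilateral_solver` are hypothetical stand-ins for the dissertation's RPC triangulation and bilateral-solver steps.

```python
# Sketch of the geometric pipeline under loose assumptions; the three helper
# functions named in the lead-in are hypothetical and not defined here.
import cv2
import numpy as np

def reconstruct_pair(img1, img2, rpc1, rpc2):
    # 1) Sparse 2D matches between the satellite image pair.
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(img1, None)
    kp2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().match(d1, d2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # 2) Triangulate inlier matches to estimate the RPC bias (hypothetical helper).
    bias = rpc_triangulate(pts1, pts2, rpc1, rpc2)
    # 3) Edge-aware interpolation of the sparse matches to a dense flow field.
    interp = cv2.ximgproc.createEdgeAwareInterpolator()
    flow = interp.interpolate(img1, pts1, img2, pts2)
    # 4) Refine with a fast bilateral solver, then triangulate densely
    #    (both hypothetical helpers standing in for the dissertation's steps).
    flow = fast_bilateral_solver(img1, flow)
    return rpc_triangulate_dense(flow, rpc1, rpc2, bias)
```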
Pixel-wise semantic classification results for satellite images are usually noisy because spatial neighborhood information is neglected. We therefore propose to aggregate multiple corresponding observations of the same 3D point to obtain high-quality semantic models. Instead of merely leveraging geometric reconstructions to provide such correspondences, we formulate geometric modeling and semantic reasoning in a joint Markov Random Field (MRF) model. Our experiments show that both tasks benefit from the joint inference.
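The joint formulation can be pictured with a schematic energy of the shape below; this is an illustrative MRF objective consistent with the description above, not the dissertation's exact energy. Each 3D point $i$ carries a height $h_i$ and a semantic label $s_i$, coupled through unary and pairwise terms over a neighborhood system $\mathcal{N}$.

```latex
E(h, s) \;=\; \sum_i \phi_{\mathrm{geo}}(h_i)
        \;+\; \sum_i \phi_{\mathrm{sem}}(s_i)
        \;+\; \sum_{(i,j) \in \mathcal{N}} \psi\!\left(h_i, h_j, s_i, s_j\right)
```

Minimizing $E$ jointly lets geometric evidence regularize the semantic labels and vice versa, which is why both tasks improve under joint inference.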
Finally, we propose a novel deep learning based approach to perform single-view parametric reconstructions from satellite imagery. By parametrizing buildings as 3D cuboids, our method simultaneously localizes the building instances visible in the image and estimates their corresponding cuboid models. Aerial LiDAR and vectorized GIS maps are utilized as supervision. Our network upsamples CNN features to detect small but cluttered building instances. In addition, we estimate building contours through a separate fully convolutional network to avoid overlapping building cuboids.
Label Efficient 3D Scene Understanding
3D scene understanding models are becoming increasingly integrated into modern society. With applications spanning autonomous driving, Augmented Reality, Virtual Reality, robotics, and mapping, the demand for well-behaved models is rapidly increasing. A key requirement for training modern 3D models is high-quality manually labelled training data. Collecting training data is often the time and monetary bottleneck, limiting the size of datasets. As modern data-driven neural networks require very large datasets to achieve good generalisation, alternative strategies to manual labelling are sought after in many industries.
In this thesis, we present a comprehensive study on achieving 3D scene understanding with fewer labels. Specifically, we evaluate four approaches: existing data, synthetic data, weak supervision, and self-supervision. The existing-data approach looks at the potential of using readily available national mapping data as coarse labels for training a building segmentation model. We further introduce an energy-based active contour snake algorithm that improves label quality by utilising co-registered LiDAR data. This is attractive because, whilst the models may still require manual labels, those labels already exist. The synthetic-data approach likewise exploits existing data that was not originally designed for training neural networks. We demonstrate a pipeline for generating a synthetic Mobile Laser Scanner dataset and experimentally evaluate whether such a synthetic dataset can be used to pre-train models that are then fine-tuned on smaller real-world datasets, increasing generalisation with less data.
A weakly-supervised approach is presented which achieves competitive performance on challenging real-world benchmark 3D scene understanding datasets with up to 95% less data. We propose a novel learning approach in which the loss function itself is learnt. Our key insight is that the loss function is a local function and can therefore be trained with less data on a simpler task. Once trained, our loss function can be used to train a 3D object detector using only unlabelled scenes. Our method is both flexible and very scalable, even performing well across datasets.
Finally, we propose a method which requires only a single geometric representation of each object class as supervision for 3D monocular object detection. We discuss why typical L2-like losses do not work for 3D object detection when using differentiable renderer-based optimisation. We show that the undesirable local minima that L2-like losses fall into can be avoided by including a Generative Adversarial Network-like loss. We achieve state-of-the-art performance on the challenging 6DoF LineMOD dataset, without any scene-level labels.
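The contrast being drawn can be written schematically as below; the notation is illustrative rather than the thesis's exact objective, with $R$ the differentiable renderer, $\theta$ the pose/shape parameters, $I$ the observed image, and $D$ a learned GAN-like critic.

```latex
\mathcal{L}_{2}(\theta) \;=\; \bigl\lVert R(\theta) - I \bigr\rVert_2^{2},
\qquad
\mathcal{L}(\theta) \;=\; \mathcal{L}_{2}(\theta)
  \;-\; \lambda \,\log D\!\bigl(R(\theta)\bigr)
```

The pixel-wise $\mathcal{L}_2$ term alone has many spurious minima under pose changes; the critic term supplies gradients that push renders toward the manifold of plausible object views, helping optimisation escape those minima.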
Multimodal perception for autonomous driving
Autonomous driving is set to play an important role among intelligent transportation systems in the coming decades. The advantages of its large-scale implementation (fewer accidents, shorter commuting times, higher fuel efficiency) have made its development a priority for academia and industry. However, there is still a long way to go to achieve fully self-driving vehicles capable of dealing with any scenario without human intervention. To this end, advances in control, navigation and, especially, environment perception technologies are still required. In particular, the detection of other road users that may interfere with the vehicle's trajectory is a key element, since it allows the current traffic situation to be modelled and decisions to be made accordingly.
The objective of this thesis is to provide solutions to some of the main challenges of on-board perception systems, such as extrinsic calibration of sensors, object detection, and deployment on real platforms. First, a calibration method for obtaining the relative transformation between pairs of sensors is introduced, eliminating the complex manual adjustment of these parameters. The algorithm makes use of an original calibration pattern and supports LiDARs as well as monocular and stereo cameras. Second, different deep learning models for 3D object detection using LiDAR data in its bird's eye view projection are presented. Through a novel encoding, the use of architectures tailored to image detection is proposed to process the 3D information of point clouds in real time. Furthermore, the effectiveness of using this projection together with image features is analyzed. Finally, a method to mitigate the accuracy drop of LiDAR-based detection networks when deployed in ad-hoc configurations is introduced. For this purpose, virtual signals mimicking the specifications of the desired real device are simulated to generate new annotated datasets that can be used to train the models.
The performance of the proposed methods is evaluated against existing alternatives using reference benchmarks in the field of computer vision (KITTI and nuScenes) and through experiments in open traffic with an automated vehicle. The results demonstrate the relevance of the presented work and its suitability for commercial use.
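A common form of such a bird's-eye-view encoding is sketched below; the thesis's exact channel definitions are not given here, so the height/intensity/density scheme and the range parameters are assumptions in the style of typical BEV detectors.

```python
# Project a LiDAR scan onto a bird's-eye-view grid with three channels:
# max height, max intensity, and a log-normalized point density per cell.
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0), cell=0.1):
    # points: (N, 4) array of x, y, z, intensity in the sensor frame.
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((3, h, w), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    xi, yi = xi[keep], yi[keep]
    z, r = points[keep, 2], points[keep, 3]
    np.maximum.at(bev[0], (yi, xi), z)          # max height per cell
    np.maximum.at(bev[1], (yi, xi), r)          # max intensity per cell
    np.add.at(bev[2], (yi, xi), 1.0)            # raw point count per cell
    bev[2] = np.minimum(1.0, np.log1p(bev[2]) / np.log(64.0))  # density channel
    return bev                                  # (3, h, w) image-like tensor
```

The payoff of this encoding is that the resulting (3, h, w) tensor can be fed directly to standard image detection architectures, which is the core of the approach described in the abstract.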
Rethinking Range View Representation for LiDAR Segmentation
LiDAR segmentation is crucial for autonomous driving perception. Recent trends favor point- or voxel-based methods, as they often yield better performance than the traditional range view representation. In this work, we unveil several key factors in building powerful range view models. We observe that the "many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments to effective learning from range view projections. We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing, that better handles the learning and processing of LiDAR point clouds from the range view. We further introduce a Scalable Training from Range view (STR) strategy that trains on arbitrary low-resolution 2D range images while still maintaining satisfactory 3D segmentation accuracy. We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts on competing LiDAR semantic and panoptic segmentation benchmarks, i.e., SemanticKITTI, nuScenes, and ScribbleKITTI.
Comment: ICCV 2023; 24 pages, 10 figures, 14 tables; webpage at https://ldkong.com/RangeFormer
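For context, the range view referred to throughout is the standard spherical projection sketched below; the vertical field-of-view values are sensor-dependent assumptions (roughly a 64-beam sensor), not parameters from the paper. The last lines also make the "many-to-one" mapping from the abstract concrete: several points can land on one pixel, and only the nearest survives.

```python
# Spherical (range view) projection of a LiDAR scan into a 2D range image.
import numpy as np

def range_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    # points: (N, 3) x, y, z in the sensor frame; fov in degrees.
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    depth = np.linalg.norm(points, axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])             # azimuth
    pitch = np.arcsin(points[:, 2] / depth)                   # inclination
    u = 0.5 * (yaw / np.pi + 1.0) * W                         # column in [0, W)
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H  # row in [0, H)
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)
    image = np.full((H, W), -1.0, dtype=np.float32)           # -1 marks empty
    order = np.argsort(depth)[::-1]           # write far first, so near wins:
    image[v[order], u[order]] = depth[order]  # the "many-to-one" mapping
    return image
```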