
    Development of a novel data acquisition and processing methodology applied to the boresight alignment of marine mobile LiDAR systems

    A Mobile LiDAR System (MLS) is a state-of-the-art data acquisition technology that maps real-world scenes as 3D point clouds. The list of MLS applications is vast, from forestry to 3D city modeling, and from road inventory assessment to port infrastructure mapping. The MLS can also be mounted on various platforms, such as aerial, terrestrial, and marine ones. Regardless of the application and the platform, to ensure that the MLS achieves its optimal performance and best accuracy, it is essential to adequately address the systematic errors of the system, especially the boresight error. The boresight error is the rotational misalignment between the two main components of the MLS, the positioning and orientation system (POS) and the LiDAR scanner, introduced by three boresight angles. Minor variations in these angular parameters can cause significant geometric accuracy issues in the final point cloud, so it is vital to employ an alignment method to cope with the boresight error of such systems. Most existing boresight alignment methods, developed mainly for aerial and terrestrial MLS, take advantage of in-situ tie-features in the environment that are adequate for these methods, for example tie-lines and tie-planes extracted from building roofs and facades. However, in low-feature environments such as forests, rural areas, ports, and harbors, where access to suitable tie-features for boresight alignment is nearly impossible, the existing methods perform poorly or fail altogether. This research therefore addresses the boresight alignment of a marine MLS in a low-feature maritime environment. We aim to introduce an acquisition and processing procedure for suitable data preparation, which serves as input to the boresight alignment method of a marine MLS. First, we explore the various tie-features used in existing methods to identify the tie-feature offering the best potential for the boresight alignment of a marine MLS. Second, we study the best configuration for the data acquisition procedure, i.e., the tie-feature characteristics and the required scanning-line pattern. This study is carried out in a simulation environment to achieve the best visibility of the boresight errors on the selected tie-feature. Finally, we validate the proposed configuration in a real-world scenario, the Port of Montreal case study. The validation results reveal that the proposed data acquisition and processing configuration leads to a rigorous boresight alignment method that is accurate, robust, and repeatable. We have also implemented a relative accuracy assessment to evaluate the obtained results, demonstrating the accuracy improvement of the point cloud after the boresight alignment procedure.
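
    As a rough illustration of where the boresight angles enter the georeferencing chain, the sketch below is hypothetical (numpy, simplified roll-pitch-yaw rotation conventions, function names invented for illustration) and is not the thesis' exact formulation; it only shows how a small boresight rotation between scanner and POS propagates into the ground coordinates, amplified by range.

        import numpy as np

        def rot_xyz(roll, pitch, yaw):
            """Rotation matrix from roll/pitch/yaw in radians (assumed x-y-z convention)."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def georeference(p_scanner, boresight_rpy, lever_arm, pos_attitude_rpy, pos_position):
            """Map one LiDAR return from the scanner frame to the mapping frame.
            Small boresight angle errors are amplified by the measurement range."""
            R_bore = rot_xyz(*boresight_rpy)        # scanner -> body (boresight alignment)
            R_body = rot_xyz(*pos_attitude_rpy)     # body -> mapping frame (from the POS)
            return pos_position + R_body @ (R_bore @ p_scanner + lever_arm)

        # Example: a 0.05 degree boresight error at 100 m range shifts the point by ~9 cm.
        p = np.array([100.0, 0.0, 0.0])
        true_pt = georeference(p, (0, 0, 0), np.zeros(3), (0, 0, 0), np.zeros(3))
        off_pt = georeference(p, (0, 0, np.radians(0.05)), np.zeros(3), (0, 0, 0), np.zeros(3))
        print(np.linalg.norm(true_pt - off_pt))     # ~0.087 m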

    Automated Extraction of Road Information from Mobile Laser Scanning Data

    Effective planning and management of transportation infrastructure requires adequate geospatial data. Existing geospatial data acquisition techniques based on conventional route surveys are very time consuming, labor intensive, and costly. Mobile laser scanning (MLS) technology enables the rapid collection of enormous volumes of highly dense, irregularly distributed, accurately geo-referenced data in the form of three-dimensional (3D) point clouds. Today, more and more commercial MLS systems are available for transportation applications. However, many transportation engineers neither have an interest in the 3D point cloud data themselves nor know how to transform such data into geometric road information in their computer-aided design (CAD) formats. Therefore, automated methods and software tools for the rapid and accurate extraction of 2D/3D road information from MLS data are urgently needed. This doctoral dissertation deals with the development and implementation of a novel strategy for the automated extraction of road information from MLS data. The main features of this strategy are: (1) the extraction of road surfaces from large volumes of MLS point clouds, (2) the generation of 2D geo-referenced feature (GRF) images from the road-surface data, (3) the exploitation of the point density and intensity of MLS data for road-marking extraction, and (4) the extension of tensor voting (TV) to curvilinear pavement crack extraction. In accordance with this strategy, a RoadModeler prototype with three computerized algorithms was developed: (1) road-surface extraction, (2) road-marking extraction, and (3) pavement-crack extraction. Four main contributions of this development can be summarized as follows. Firstly, a curb-based approach to road-surface extraction assisted by the vehicle's trajectory is proposed and implemented. The vehicle's trajectory and the function of curbs, which separate road surfaces from sidewalks, are used to efficiently separate road-surface points from the large volume of MLS data. The accuracy of the extracted road surfaces is validated against manually selected reference points. Secondly, the extracted road enables accurate detection of road markings and cracks for transportation-related applications in road traffic safety. To further improve computational efficiency, the extracted 3D road data are converted into 2D image data, termed a GRF image. The GRF image of the extracted road enables an automated road-marking extraction algorithm and an automated crack detection algorithm, respectively. Thirdly, the automated road-marking extraction algorithm applies point-density-dependent, multi-threshold segmentation to the GRF image to overcome the unevenly distributed intensity caused by the scanning range, the incidence angle, and the surface characteristics of the illuminated object. Morphological operations are then applied to deal with noise and the incompleteness of the extracted road markings. Fourthly, the automated crack extraction algorithm applies an iterative tensor voting (ITV) algorithm to the GRF image for crack enhancement. Tensor voting, a perceptual organization method capable of extracting curvilinear structures from a noisy and corrupted background, is explored and extended to the field of crack detection. The successful development of the three algorithms suggests that the RoadModeler strategy offers a solution to the automated extraction of road information from MLS data. Recommendations are given for future research and development to carry this progress beyond the prototype stage and towards everyday use.
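
    A minimal sketch of the kind of 2D rasterization step the GRF-image idea implies (numpy only; cell size, function name, and layout are illustrative assumptions, not the dissertation's implementation): extracted road-surface points are binned into a georeferenced grid that keeps per-cell mean intensity and point density, which can then feed 2D image-processing steps such as thresholding or tensor voting.

        import numpy as np

        def rasterize_to_grf(points_xyz, intensity, cell_size=0.05):
            """Bin 3D road-surface points into a 2D geo-referenced grid.
            Returns mean-intensity and point-density rasters plus the grid origin."""
            xy = points_xyz[:, :2]
            origin = xy.min(axis=0)
            idx = np.floor((xy - origin) / cell_size).astype(int)
            ncols, nrows = idx.max(axis=0) + 1
            density = np.zeros((nrows, ncols))
            inten_sum = np.zeros((nrows, ncols))
            np.add.at(density, (idx[:, 1], idx[:, 0]), 1)          # points per cell
            np.add.at(inten_sum, (idx[:, 1], idx[:, 0]), intensity)
            mean_intensity = np.where(density > 0, inten_sum / np.maximum(density, 1), 0.0)
            return mean_intensity, density, origin

        # Road markings are brighter than asphalt, so a density-aware threshold on the
        # mean-intensity raster is a simple starting point for marking extraction.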

    Geometric data understanding: deriving case-specific features

    There exists a tradition of precise geometric modeling, in which uncertainties in data can be treated as noise. Another tradition relies on the statistical nature of vast quantities of data, where geometric regularity is intrinsic to the data and statistical models usually grasp this level only indirectly. This work focuses on point cloud data of natural resources and on silhouette recognition from video input as two real-world examples of problems whose geometric content is intangible at the raw-data level. This content could be discovered and modeled to some degree by machine learning (ML) approaches such as deep learning, but either direct coverage of the geometry in the samples or the addition of a special geometry-invariant layer is necessary. Geometric content is central when there is a need for direct observation of spatial variables, or when one needs a mapping to a geometrically consistent data representation in which, for example, outliers or noise can be easily discerned. In this thesis we consider the transformation of original input data to a geometric feature space in two example problems. The first example is the curvature of surfaces, which has met renewed interest since the introduction of ubiquitous point cloud data and the maturation of discrete differential geometry. Curvature spectra can characterize a spatial sample rather well and provide useful features for ML purposes. The second example involves projective methods applied to video stereo-signal analysis in swimming analytics. The aim is to find meaningful local geometric representations for feature generation, which also facilitate additional analysis based on a geometric understanding of the model. The features are associated directly with some geometric quantity, and this makes it easier to express geometric constraints in a natural way, as shown in the thesis. Also, visualization and further feature generation become much easier. Third, the approach provides sound baseline methods for more traditional ML approaches, e.g. neural network methods. Fourth, most ML methods can utilize the geometric features presented in this work as additional features.
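
    As an illustration of one common way to derive a curvature-like geometric feature from raw points (a sketch using numpy and scipy's KD-tree, not the thesis' exact formulation): the eigenvalues of the local covariance give a per-point "surface variation" value that can be added to the feature set of an ML model.

        import numpy as np
        from scipy.spatial import cKDTree

        def surface_variation(points, k=20):
            """Per-point surface variation lambda_0 / (lambda_0 + lambda_1 + lambda_2),
            a curvature proxy computed from the k-nearest-neighbour covariance."""
            tree = cKDTree(points)
            _, nbr_idx = tree.query(points, k=k)
            feats = np.empty(len(points))
            for i, idx in enumerate(nbr_idx):
                nbrs = points[idx] - points[idx].mean(axis=0)
                eigvals = np.linalg.eigvalsh(nbrs.T @ nbrs / k)   # ascending order
                feats[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
            return feats  # ~0 on planar patches, larger on edges and highly curved regions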

    Calibration of full-waveform airborne laser scanning data for 3D object segmentation

    Airborne Laser Scanning (ALS) is a fully commercial technology which has seen rapid uptake by the photogrammetry and remote sensing community to classify surface features and enhance automatic object recognition and extraction processes. 3D object segmentation is considered one of the major research topics in the field of laser scanning for feature recognition and object extraction applications. The demand for automatic segmentation has increased significantly with the emergence of full-waveform (FWF) ALS, which potentially offers an unlimited number of return echoes. FWF has shown potential to improve available segmentation and classification techniques by exploiting the additional physical observables provided alongside the standard geometric information. However, use of the additional FWF information is not recommended without prior radiometric calibration that takes into consideration all the parameters affecting the backscattered energy. The main focus of this research is to calibrate this additional FWF information in order to develop the potential of the point clouds for segmentation algorithms. Echo amplitude normalisation as a function of local incidence angle was identified as a particularly critical aspect, and a novel echo amplitude normalisation approach, termed the Robust Surface Normal (RSN) method, has been developed. Following the radar equation, a comprehensive radiometric calibration routine is introduced to account for all variables affecting the backscattered laser signal. Thereafter, a segmentation algorithm is developed which uses the raw 3D point clouds to estimate the normal for individual echoes based on the RSN method. The segmentation criterion is the normal vector augmented by the calibrated backscatter signals. The developed segmentation routine aims to fully integrate FWF data to improve feature recognition and 3D object segmentation applications. The routine was tested over various feature types from two datasets with different properties to assess its potential. The results are compared to those obtained using only geometric information, without the additional FWF radiometric information, to assess performance against existing methods. The results confirmed the potential of the additional FWF observables to improve segmentation algorithms. The new approach was also validated against manual segmentation results, revealing a successful automatic implementation and achieving an accuracy of 82%.
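
    A hedged sketch of the kind of radar-equation-based correction described above (illustrative only; the thesis' Robust Surface Normal method for estimating the local normal from the raw echoes is not reproduced here): for extended targets the received power scales roughly with cos(theta)/R^2, so the amplitude is normalised for range and for the cosine of the local incidence angle.

        import numpy as np

        def normalize_amplitude(amplitude, rng, surface_normal, beam_dir, ref_range=1000.0):
            """Correct echo amplitude for range and local incidence angle.
            Assuming P_r ~ cos(theta) / R^2 for extended targets, the corrected
            (reflectance-like) value is amplitude * (R / R_ref)^2 / cos(theta)."""
            n = surface_normal / np.linalg.norm(surface_normal, axis=-1, keepdims=True)
            d = beam_dir / np.linalg.norm(beam_dir, axis=-1, keepdims=True)
            cos_theta = np.clip(np.abs(np.sum(n * d, axis=-1)), 0.05, 1.0)  # guard grazing angles
            return amplitude * (rng / ref_range) ** 2 / cos_theta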

    Semi-automated Generation of High-accuracy Digital Terrain Models along Roads Using Mobile Laser Scanning Data

    Transportation agencies in many countries require high-accuracy (2-20 cm) digital terrain models (DTMs) along roads for various transportation-related applications. Compared to traditional ground surveys and aerial photogrammetry, mobile laser scanning (MLS) has great potential for the rapid acquisition of high-density and high-accuracy three-dimensional (3D) point clouds covering roadways. Such MLS point clouds can be used to generate high-accuracy DTMs in a cost-effective fashion. However, the large volume, mixed density, and irregular distribution of MLS points, as well as the complexity of the roadway environment, make DTM generation a very challenging task. In addition, most available software packages were originally developed for handling airborne laser scanning (ALS) point clouds and cannot be used directly to process MLS point clouds. Therefore, methods and software tools to automatically generate DTMs along roads are urgently needed by transportation users. This thesis presents an applicable workflow to generate DTMs from MLS point clouds. The strategy is divided into two main parts: removing non-ground points and interpolating the remaining ground points into gridded DTMs. First, a voxel-based upward-growing algorithm was developed to effectively and accurately remove non-ground points. Then, through a comparative study of four interpolation algorithms, namely Inverse Distance Weighted (IDW), Nearest Neighbour, Linear, and Natural Neighbour interpolation, the IDW algorithm was selected to generate the gridded DTMs due to its higher accuracy and higher computational efficiency. The obtained results demonstrate that the voxel-based upward-growing algorithm is suitable for areas without steep terrain features. The average overall accuracy, correctness, and completeness values of this algorithm were 0.975, 0.980, and 0.986, respectively; in some cases the overall accuracy exceeded 0.990. The results demonstrate that the semi-automated DTM generation method developed in this thesis was able to create DTMs with a centimetre-level grid size and 10 cm vertical accuracy from the MLS point clouds.
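
    A minimal sketch (numpy/scipy, hypothetical grid parameters) of the IDW interpolation step that turns retained ground points into a gridded DTM; the voxel-based upward-growing ground filter itself is not reproduced.

        import numpy as np
        from scipy.spatial import cKDTree

        def idw_dtm(ground_xyz, cell_size=0.1, k=8, power=2.0):
            """Interpolate ground points onto a regular grid with inverse distance weighting."""
            xy, z = ground_xyz[:, :2], ground_xyz[:, 2]
            xmin, ymin = xy.min(axis=0)
            xmax, ymax = xy.max(axis=0)
            gx = np.arange(xmin, xmax + cell_size, cell_size)
            gy = np.arange(ymin, ymax + cell_size, cell_size)
            grid_xy = np.column_stack([a.ravel() for a in np.meshgrid(gx, gy)])
            dist, idx = cKDTree(xy).query(grid_xy, k=k)           # k nearest ground points per node
            w = 1.0 / np.maximum(dist, 1e-6) ** power             # avoid division by zero
            dtm = np.sum(w * z[idx], axis=1) / np.sum(w, axis=1)
            return dtm.reshape(len(gy), len(gx)), (gx, gy)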

    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in numerous applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, as well as 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains. Different variations of order statistic filters originally proposed for image processing are extended to point cloud filtering in this dissertation, and a new adaptive vector median filter is proposed for removing noise and outliers from noisy point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models as well as synthetic models and real scenes; 2) a hardware acceleration of the proposed method for filtering point clouds is implemented on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LiDAR data filtering is proposed, with the objective of enabling automatic extraction of ground points from aerial LiDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order-statistics-based filters are widely used in signal and image processing because they can easily remove outlier noise and preserve important features. This dissertation demonstrates a wide range of results with the median filter, vector median filter, fuzzy vector median filter, adaptive mean filter, adaptive median filter, and adaptive vector median filter on point cloud data. The experiments show that large-scale noise is removed while important features of the point cloud are preserved, with reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)) as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud) are employed to assess the performance of the filters in various cases corrupted by different noise models. The adaptive vector median filter is further optimized for denoising and ground filtering of aerial LiDAR point clouds, and is also accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, which is an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is used to compute the Laplace-Beltrami operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for the geometric processing of 3D meshes; this work uses several of them for sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
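
    A compact sketch (numpy/scipy, illustrative only) of the plain vector median filter on which the adaptive variants build: each point is replaced by the neighbour that minimizes the sum of distances to the rest of its k-neighbourhood, which rejects outliers without inventing new coordinates.

        import numpy as np
        from scipy.spatial import cKDTree

        def vector_median_filter(points, k=10):
            """Replace each point by the vector median of its k-nearest neighbours."""
            tree = cKDTree(points)
            _, nbr_idx = tree.query(points, k=k)
            filtered = np.empty_like(points)
            for i, idx in enumerate(nbr_idx):
                nbrs = points[idx]                                  # (k, 3) neighbourhood
                # pairwise distances within the neighbourhood
                d = np.linalg.norm(nbrs[:, None, :] - nbrs[None, :, :], axis=-1)
                filtered[i] = nbrs[np.argmin(d.sum(axis=1))]        # smallest total distance wins
            return filtered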

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; to monitor complex land ecosystems for biodiversity conservation; for precision agriculture in the management of soils, crops, and pests; for urban planning; for disaster monitoring; and so on. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially based conservation actions. Moreover, they enable observation of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environmental monitoring, and more. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Remote Sensing

    This dual conception of remote sensing led us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    Estimating landscape irrigated areas and potential water conservation at the rural-urban interface using remote sensing and GIS

    The research goals were to analyze patterns of urban landscape water use, assess landscape water conservation potential, and identify locations with the capacity to conserve. The methodological contributions involved acquiring airborne multispectral digital images over two cities; the images were processed, classified, and imported into a GIS environment, where landscaped areas were extracted and combined with property and water billing data and local evapotranspiration rates to calculate landscape irrigation applications exceeding estimated water needs. Additional analyses were conducted to compare classified aerial images to ground-measured landscaped areas, landscaped areas to total parcel size, water use on residential and commercial properties, and turf areas under trees when the trees were leafed out and when they were bare. The results verified the accuracy and value of this approach for municipal water management, showed that more commercial properties applied water in excess of estimated needs compared to residential ones, and showed that small percentages of users accounted for most of the excess irrigation.
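
    A small worked sketch (hypothetical numbers and a simplified water-balance assumption, not the study's exact model) of the per-parcel calculation implied above: estimated landscape water need is the local evapotranspiration depth times the classified landscaped area, and the excess is the metered outdoor use minus that need.

        # Simplified per-parcel landscape water budget (illustrative values only).
        landscaped_area_m2 = 450.0          # from classified airborne imagery
        et_depth_m = 0.12                   # seasonal reference evapotranspiration depth, metres
        plant_factor = 0.8                  # assumed landscape coefficient

        estimated_need_m3 = landscaped_area_m2 * et_depth_m * plant_factor   # 43.2 m^3
        metered_outdoor_use_m3 = 60.0       # from water billing records

        excess_m3 = max(metered_outdoor_use_m3 - estimated_need_m3, 0.0)
        print(f"excess irrigation: {excess_m3:.1f} m^3")                      # 16.8 m^3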