
    Segmentation-Based Ground Points Detection from Mobile Laser Scanning Point Cloud


    Segmentation-Based Filtering of Airborne LiDAR Point Clouds by Progressive Densification of Terrain Segments

    Filtering is one of the core post-processing steps for Airborne Laser Scanning (ALS) point clouds. A segmentation-based filtering (SBF) method is proposed herein, composed of three key steps: point cloud segmentation, multiple-echo analysis, and iterative judgment. The third step is the main contribution: the iterative judgment follows the framework of the classic progressive TIN (triangular irregular network) densification (PTD) method, but with a segment rather than a single point as the basic processing unit. Seven benchmark datasets provided by ISPRS Working Group III/3 are used to test the SBF algorithm against the classic PTD method. Experimental results suggest that, compared with PTD, the SBF approach preserves discontinuities of landscapes and removes the lower parts of large objects attached to the ground surface. As a result, SBF reduces omission errors and total errors by 18.26% and 11.47% respectively, which would significantly decrease the cost of manual operation required in post-processing.
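The segment-level judgment at the heart of SBF can be illustrated with a deliberately simplified sketch (an illustration, not the authors' code): the ground surface is approximated here by a single least-squares plane instead of a progressively densified TIN, and a whole segment is accepted as ground when its lowest point lies close enough to that surface. All function and parameter names are hypothetical.

```python
import numpy as np

def judge_segments(segments, ground_pts, dist_thresh=0.5):
    """Simplified SBF-style iterative judgment: accept a whole segment as
    ground when its lowest point lies close to the current ground surface
    (here a single plane z = a*x + b*y + c, standing in for the TIN)."""
    # Fit the plane to the current ground estimate by least squares.
    A = np.c_[ground_pts[:, 0], ground_pts[:, 1], np.ones(len(ground_pts))]
    coef, *_ = np.linalg.lstsq(A, ground_pts[:, 2], rcond=None)
    accepted = []
    for seg in segments:
        low = seg[np.argmin(seg[:, 2])]          # lowest point of the segment
        z_pred = coef @ [low[0], low[1], 1.0]    # ground height beneath it
        if abs(low[2] - z_pred) < dist_thresh:   # judge the segment as a whole
            accepted.append(seg)
    return accepted
```

In the real PTD framework the accepted points would be inserted into the TIN and the judgment iterated; the point of the sketch is only that the unit of decision is a segment, not a single point.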

    3D Change Detection Using Point Clouds: A Review

    Change detection is an important step in characterizing object dynamics at the earth’s surface. In multi-temporal point clouds, the main challenge is to detect true changes at different granularities in scenes subject to significant noise and occlusion. To better understand new research perspectives in this field, a deep review of recent advances in 3D change detection methods is needed. To this end, we present a comprehensive review of the state of the art of 3D change detection approaches, focusing on those using 3D point clouds. We review standard methods and recent advances in the use of machine and deep learning for change detection. In addition, the paper presents a summary of 3D point cloud benchmark datasets from different sensors (aerial, mobile, and static), together with associated information. We also investigate representative evaluation metrics for this task. Finally, we present open questions and research perspectives. By reviewing the relevant papers in the field, we highlight the potential of bi- and multi-temporal point clouds for better monitoring analysis across various applications.
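As an example of the standard, non-learning methods such reviews cover, a minimal cloud-to-cloud (C2C) change indicator can be sketched as follows. This is an illustrative brute-force version, not any specific method from the review; real pipelines would use a k-d tree for the neighbour search.

```python
import numpy as np

def c2c_distance(epoch1, epoch2):
    """Cloud-to-cloud (C2C) change indicator: for each point of the later
    epoch, the distance to its nearest neighbour in the earlier epoch.
    Large distances flag candidate changes. Brute force: O(N*M)."""
    d = np.linalg.norm(epoch2[:, None, :] - epoch1[None, :, :], axis=2)
    return d.min(axis=1)
```

Thresholding the returned distances gives a crude binary change map; the granularity and noise issues the review discusses are exactly what such a naive indicator struggles with.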

    Extraction of Digital Terrain Models from Airborne Laser Scanning Data based on Transfer-Learning

    With rapid urbanization, timely and comprehensive urban thematic and topographic information is in high demand. Digital Terrain Models (DTMs), a key form of urban topographic information, directly affect subsequent applications such as smart cities, urban microclimate studies, and emergency and disaster management. Both the accuracy and the resolution of DTMs therefore define the quality of these downstream tasks. Current workflows for DTM extraction vary in accuracy and resolution due to the complexity of terrain and off-terrain objects. Traditional filters, which rely on certain assumptions about surface morphology, generalize poorly to complex terrain. Recent developments in semantic labeling of point clouds have shed light on this problem: in that context, DTM extraction can be viewed as a binary classification task. This study develops a workflow for automated point-wise DTM extraction from Airborne Laser Scanning (ALS) point clouds using a transfer-learning approach on ResNet. The workflow consists of three parts: feature image generation, transfer learning using ResNet, and accuracy assessment. First, each point is transformed into a feature image based on its elevation differences with neighbouring points. Then, the feature images are classified into ground and non-ground using ResNet models, and the ground points are extracted by remapping each feature image to its corresponding points. Lastly, the proposed workflow is compared with two traditional filters, the Progressive Morphological Filter (PMF) and Progressive TIN Densification (PTD). Results show that the proposed workflow achieves superior DTM extraction accuracy, yielding only 0.522% Type I error, 4.84% Type II error, and 2.43% total error; in comparison, the Type I, Type II, and total errors are 7.82%, 11.6%, and 9.48% for PMF, and 1.55%, 5.37%, and 3.22% for PTD, respectively. The root mean squared error of the interpolated DTM at 1 m resolution is only 7.3 cm. Moreover, the use of pre-trained weights greatly accelerated training and enabled the network to reach unprecedented accuracy even with a small training set. Qualitative analysis further investigates the reliability and limitations of the proposed workflow.
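The Type I, Type II, and total errors used in the comparison follow the usual ground-filtering evaluation definitions: Type I counts ground points misclassified as non-ground, Type II counts non-ground points misclassified as ground. A small sketch of how such an accuracy assessment can be computed (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def filter_errors(true_ground, pred_ground):
    """Ground-filtering error rates from two boolean masks over all points
    (True = ground). Returns (Type I, Type II, total) as fractions."""
    t = np.asarray(true_ground)
    p = np.asarray(pred_ground)
    type1 = np.sum(t & ~p) / max(np.sum(t), 1)     # ground rejected
    type2 = np.sum(~t & p) / max(np.sum(~t), 1)    # non-ground accepted
    total = np.sum(t != p) / len(t)                # all misclassifications
    return type1, type2, total
```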

    Geometric data understanding: deriving case-specific features

    One tradition uses precise geometric modeling, where uncertainties in data can be treated as noise. Another relies on the statistical nature of vast quantities of data, where geometric regularity is intrinsic to the data and statistical models usually grasp this level only indirectly. This work focuses on point cloud data of natural resources and on silhouette recognition from video input, two real-world problems whose geometric content is intangible in the raw data representation. This content could be discovered and modeled to some degree by machine learning (ML) approaches such as deep learning, but either direct coverage of the geometry in the samples or the addition of a special geometry-invariant layer is necessary. Geometric content is central when direct observation of spatial variables is needed, or when one needs a mapping to a geometrically consistent data representation in which, e.g., outliers or noise can be easily discerned. In this thesis we consider the transformation of original input data into a geometric feature space in two example problems. The first is the curvature of surfaces, which has met renewed interest since the introduction of ubiquitous point cloud data and the maturation of discrete differential geometry. Curvature spectra can characterize a spatial sample rather well and provide useful features for ML purposes. The second example involves projective methods applied to stereo video-signal analysis in swimming analytics. The aim is to find meaningful local geometric representations for feature generation, which also facilitate additional analysis based on a geometric understanding of the model. The features are associated directly with some geometric quantity, which makes it easier to express geometric constraints in a natural way, as shown in the thesis. Visualization and further feature generation also become much easier. In addition, the approach provides sound baseline methods for more traditional ML approaches, e.g. neural network methods, and most ML methods can utilize the geometric features presented in this work as additional features.
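As a hedged illustration of a curvature-like feature for point clouds, the following computes a per-point "surface variation", a common discrete curvature proxy based on local PCA eigenvalues. This is a generic sketch under that assumption, not the thesis's own estimator:

```python
import numpy as np

def surface_variation(points, k=8):
    """Curvature proxy per point: smallest eigenvalue of the covariance of
    the k nearest neighbours, divided by the eigenvalue sum. Near zero on
    planar patches, larger at creases and edges. Brute-force neighbours."""
    out = np.empty(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]          # k nearest (incl. itself)
        ev = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        out[i] = ev[0] / max(ev.sum(), 1e-12)
    return out
```

Collecting such values over a sample yields a curvature spectrum of the kind the abstract describes as a useful ML feature.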

    Theoretical quantification of the effects of acquisition-system settings on the descriptive variables of the LiDAR point cloud

    The mapping of the forest resource is currently achieved through inventories made across large territories using automatic or semi-automatic measurement methods at broad scales. Notably, the development of airborne LiDAR (light detection and ranging) has opened the way for new perspectives in this context. Despite its proven suitability as a tool for inventories and mapping, the scientific literature on airborne LiDAR shows that methods for processing the acquired information remain limited, and are usually valid only for a given region of interest and a given acquisition device. Indeed, modifying the acquisition device generates variation in the structure of the point cloud that often restricts the range of application of resource-evaluation models. To move towards resource-mapping models that are less dependent on the characteristics of both the study area and the acquisition device, it is important to understand the source of such variation and how to correct it. We investigate how variations in the settings of the data acquisition systems may generate variation in the structure of the obtained point clouds. These questions are treated using simple theoretical and mathematical models, and we show, to a certain extent, that it is possible to correct LiDAR data, and thus to normalise measurements to simulate homogeneous acquisitions made with a single “standard” acquisition device. The challenge pursued in this thesis is to propose and initiate, for the future, data-processing methods relying on better-established standards, in order to build more accurate and more versatile tools for the large-scale mapping of forest resources.
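One very simple normalisation step in this spirit, sketched here as an assumption rather than the thesis's actual model, is to thin every acquisition down to a common target point density on a regular grid, so that datasets from different devices become comparable in at least that one descriptive variable:

```python
import numpy as np

def thin_to_density(points, target_density, cell=1.0, seed=0):
    """Randomly thin an (N, 3) point cloud so that no grid cell of side
    `cell` (metres) keeps more than target_density * cell**2 points,
    simulating a homogeneous 'standard' acquisition density (pts/m^2)."""
    rng = np.random.default_rng(seed)
    keys = np.floor(points[:, :2] / cell).astype(int)   # cell index per point
    per_cell = int(target_density * cell * cell)
    keep = []
    for key in {tuple(k) for k in keys}:
        idx = np.where((keys == key).all(axis=1))[0]
        if len(idx) > per_cell:                          # cell too dense:
            idx = rng.choice(idx, per_cell, replace=False)  # subsample it
        keep.extend(idx.tolist())
    return points[np.sort(keep)]
```

Density is only one of the point-cloud descriptors affected by acquisition settings; the thesis's theoretical models address such variation more generally, which a thinning heuristic alone cannot.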