
    ANALYSIS OF FULL-WAVEFORM LIDAR DATA FOR CLASSIFICATION OF URBAN AREAS

    In contrast to conventional airborne multi-echo laser scanner systems, full-waveform (FW) lidar systems are able to record the entire emitted and backscattered signal of each laser pulse. Instead of clouds of individual 3D points, FW devices provide connected 1D profiles of the 3D scene, which contain more detailed and additional information about the structure of the illuminated surfaces. This paper focuses on the analysis of FW data in urban areas. The problem of modelling FW lidar signals is tackled first. The standard method assumes the waveform to be the superposition of the signal contributions of each scattering object within the laser beam, each approximated by a Gaussian distribution. This model is suitable in many cases, especially in vegetated terrain. However, since it is not tailored to urban waveforms, the generalized Gaussian model is selected here instead. Then, a pattern recognition method for urban area classification is proposed: a supervised method using Support Vector Machines is applied to the FW point cloud, based on the parameters extracted in the post-processing step. Results show that it is possible to partition urban areas into building, vegetation, natural ground and artificial ground regions with high accuracy using only lidar waveforms.
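
The generalized Gaussian decomposition described above can be sketched as follows. This is a minimal illustration on a simulated single echo, not the authors' implementation; sampling rate, amplitudes and noise level are assumptions. The shape parameter (here `alpha`) distinguishes flat, box-like urban echoes (`alpha > 2`) from ordinary Gaussian ones (`alpha = 2`):

```python
import numpy as np
from scipy.optimize import curve_fit

def generalized_gaussian(t, amp, mu, sigma, alpha):
    """Generalized Gaussian echo model: alpha = 2 recovers the
    ordinary Gaussian; larger alpha gives the flatter, box-like
    echoes typical of man-made surfaces."""
    return amp * np.exp(-(np.abs(t - mu) / sigma) ** alpha)

# Simulated single-echo waveform; all parameter values are
# illustrative assumptions, not values from the paper.
t = np.arange(0.0, 60.0, 1.0)
rng = np.random.default_rng(0)
waveform = generalized_gaussian(t, 120.0, 30.0, 4.0, 3.0) \
    + rng.normal(0.0, 1.0, t.size)

# Fit the model; p0 is a rough initialization from the raw peak.
p0 = [waveform.max(), t[np.argmax(waveform)], 3.0, 2.0]
params, _ = curve_fit(generalized_gaussian, t, waveform, p0=p0)
amp, mu, sigma, alpha = params
```

In a full decomposition, one such model is fitted per detected echo of the waveform, and the fitted parameters (amplitude, width, shape) become per-point features for the later classification step.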

    LIDAR WAVEFORM MODELING USING A MARKED POINT PROCESS

    Lidar waveforms are 1D signals consisting of a train of echoes, each of which corresponds to a scattering target on the Earth's surface. Modelling these echoes with the appropriate parametric function is necessary to retrieve physical information about these objects and characterize their properties. This paper presents a marked-point-process-based model that reconstructs a lidar signal as a set of parametric functions. The model takes into account both a data term, which measures the coherence between the models and the waveforms, and a regularizing term, which introduces physical knowledge on the reconstructed signal. We search for the best configuration of functions with a Reversible Jump Markov Chain Monte Carlo sampler coupled with simulated annealing. Results are finally presented on different kinds of signals in urban areas.

    A Marked Point Process for Modeling Lidar Waveforms

    Lidar waveforms are 1-D signals representing a train of echoes caused by reflections at different targets. Modeling these echoes with the appropriate parametric function is useful to retrieve information about the physical characteristics of the targets. This paper presents a new probabilistic model based upon a marked point process which reconstructs the echoes from recorded discrete waveforms as a sequence of parametric curves. Such an approach allows fitting each mode of a waveform with the most suitable function and dealing with both symmetric and asymmetric echoes. The model takes into account a data term, which measures the coherence between the models and the waveforms, and a regularization term, which introduces prior knowledge on the reconstructed signal. The exploration of the associated configuration space is performed by a reversible jump Markov chain Monte Carlo (RJMCMC) sampler coupled with simulated annealing. Experiments with different kinds of lidar signals, especially from urban scenes, show the high potential of the proposed approach. To further demonstrate the advantages of the suggested method, actual laser scans are classified and the results are reported.
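
The marked-point-process reconstruction can be illustrated with a heavily simplified sketch: a configuration of Gaussian echoes is optimized with birth/death/perturb moves under a Metropolis acceptance rule and simulated annealing. This omits the reversible-jump Jacobian terms and the asymmetric echo models of the paper; the energy weights and proposal distributions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def model(t, config):
    """Superposition of the echoes in a configuration."""
    out = np.zeros_like(t)
    for amp, mu, sigma in config:
        out = out + gaussian(t, amp, mu, sigma)
    return out

def energy(t, waveform, config, lam=50.0):
    """Data term (squared residual) plus a regularizing term
    penalizing the number of echoes (illustrative prior)."""
    resid = waveform - model(t, config)
    return float(resid @ resid) + lam * len(config)

# Synthetic two-echo waveform (all values are assumptions).
t = np.arange(0.0, 100.0)
waveform = (gaussian(t, 10.0, 40.0, 3.0)
            + gaussian(t, 6.0, 70.0, 4.0)
            + rng.normal(0.0, 0.3, t.size))

config, e = [], energy(t, waveform, [])
best, best_e = list(config), e
temp = 100.0
for _ in range(20000):
    move = rng.choice(["birth", "death", "perturb"])
    proposal = list(config)
    if move == "birth":
        # Data-driven birth: place a new echo at the residual peak.
        resid = waveform - model(t, proposal)
        i = int(np.argmax(np.abs(resid)))
        proposal.append((abs(resid[i]), t[i], rng.uniform(1.0, 6.0)))
    elif move == "death" and proposal:
        proposal.pop(rng.integers(len(proposal)))
    elif move == "perturb" and proposal:
        i = rng.integers(len(proposal))
        amp, mu, sigma = proposal[i]
        proposal[i] = (amp + rng.normal(0.0, 0.3),
                       mu + rng.normal(0.0, 0.5),
                       max(0.5, sigma + rng.normal(0.0, 0.2)))
    e_new = energy(t, waveform, proposal)
    # Metropolis acceptance with a geometric annealing schedule.
    if rng.random() < np.exp(min(0.0, (e - e_new) / temp)):
        config, e = proposal, e_new
        if e < best_e:
            best, best_e = list(config), e
    temp = max(0.1, temp * 0.999)
```

The per-echo penalty `lam` plays the role of the prior on the number of objects: a birth is only kept if the residual it removes outweighs the cost of an extra echo.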

    Remote Sensing / An object-based semantic classification method for high resolution remote sensing imagery using ontology

    Geographic Object-Based Image Analysis (GEOBIA) techniques have become increasingly popular in remote sensing, and GEOBIA has been claimed to represent a paradigm shift in remote sensing interpretation. Still, GEOBIA, like other emerging paradigms, lacks formal expressions and objective modelling structures, and in particular semantic classification methods using ontologies. This study puts forward an object-based semantic classification method for high resolution satellite imagery using an ontology, aiming to fully exploit the advantages of ontology for GEOBIA. A three-step workflow is introduced: ontology modelling, initial classification based on a data-driven machine learning method, and semantic classification based on knowledge-driven semantic rules. The classification part is based on data-driven machine learning: segmentation, feature selection, sample collection and an initial classification. Then, image objects are re-classified based on the ontological model, whereby the semantic relations are expressed in the formal languages OWL and SWRL. The results show that the method with ontology, as compared to the decision tree classification without the ontology, yielded minor statistical improvements in terms of accuracy for this particular image. However, this framework enhances existing GEOBIA methodologies: ontologies express and organize the whole structure of GEOBIA and allow establishing relations, particularly spatially explicit relations between objects as well as multi-scale/hierarchical relations.
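
The two-stage idea, data-driven initial labels refined by knowledge-driven rules, can be sketched as follows. The objects, features, thresholds and labels below are hypothetical stand-ins for the study's OWL/SWRL rule base, used only to show the re-classification pattern:

```python
# Hypothetical image objects with features from segmentation and an
# initial data-driven label (e.g. from a decision tree classifier).
objects = [
    {"id": 1, "label": "building", "ndvi": 0.65, "height": 1.2},
    {"id": 2, "label": "building", "ndvi": 0.10, "height": 9.5},
    {"id": 3, "label": "vegetation", "ndvi": 0.05, "height": 0.2},
]

# Knowledge-driven rules in the spirit of SWRL, e.g.
# "Building(x) ^ hasNDVI(x, v) ^ greaterThan(v, 0.4) -> Vegetation(x)".
rules = [
    (lambda o: o["label"] == "building" and o["ndvi"] > 0.4, "vegetation"),
    (lambda o: o["label"] == "vegetation" and o["ndvi"] < 0.2, "bare_ground"),
]

# Semantic re-classification: the first rule whose antecedent holds
# overrides the initial machine-learning label.
for obj in objects:
    for condition, new_label in rules:
        if condition(obj):
            obj["label"] = new_label
            break
```

In the actual framework the rules are stated declaratively in OWL/SWRL and evaluated by a reasoner, so the knowledge base can be inspected and extended independently of the classifier.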

    Uwe Stilla: GENERATION OF 3D-CITY MODELS AND THEIR UTILISATION IN IMAGE SEQUENCES

    In this paper we describe the construction of a city model and its use in supporting the analysis of image sequences. Our city model consists of building models generated from large-scale vector maps and laser altimeter data. First, the vector map is analysed to group the outlines of buildings and to obtain a hierarchical description of buildings or building complexes. The base areas of single buildings are used to mask the corresponding elevation data. Depending on the task, prismatic or polyhedral object models are reconstructed from the masked elevation data. The interpretation of image sequences taken by an airborne sensor in oblique view can be supported by a 3D city model. Possible GIS applications include automatically overlaying selected buildings or querying detailed building information from a database by interactively pointing at a frame in a sequence. The projection parameters of the model data are derived from GPS and INS. In practice, the projected building contours do not exactly coincide with their image locations. To overcome this problem, an automated matching of the image and model descriptions is required. After image and vector map analysis, correspondences between both scene descriptions can be found. These correspondences are used to correct the navigation data. This approach can easily be extended to image sequences: the corrected navigation data of one frame can be used as a prediction for subsequent frames.
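
The projection step, mapping building-model corners into a frame using a pose derived from navigation data, can be sketched with a simple pinhole model. The pose, focal length and building geometry below are illustrative assumptions, not the paper's sensor model:

```python
import numpy as np

def project_points(points_3d, rotation, translation, focal):
    """Pinhole projection of building-model corners into a frame.
    `rotation` (3x3) and `translation` (3,) encode the camera pose,
    as it would be derived from GPS and INS navigation data."""
    cam = (rotation @ points_3d.T).T + translation   # world -> camera
    return focal * cam[:, :2] / cam[:, 2:3]          # perspective divide

# Hypothetical prismatic building: four roof corners, 20 m high.
roof = np.array([[0.0, 0.0, 20.0], [10.0, 0.0, 20.0],
                 [10.0, 10.0, 20.0], [0.0, 10.0, 20.0]])
pose_R = np.eye(3)                     # axes aligned with the world, for simplicity
pose_t = np.array([0.0, 0.0, 480.0])   # roof ends up at 500 m depth
uv = project_points(roof, pose_R, pose_t, 1000.0)
```

With inaccurate GPS/INS, the projected contour `uv` is offset from the imaged building; the matching step estimates a correction to the pose from the found correspondences.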

    Perceptual grouping for automatic detection of man-made structures in high-resolution SAR data

    Individual Tree Detection in Urban ALS Point Clouds with 3D Convolutional Networks

    Since trees are a vital part of urban green infrastructure, automatic mapping of individual urban trees is becoming increasingly important for city management and planning. Although deep-learning-based object detection networks are the state of the art in computer vision, their adaptation to individual tree detection in urban areas has scarcely been studied. Some existing works have employed 2D object detection networks for this purpose; however, these have used three-dimensional information only in the form of projected feature maps. In contrast, we exploited the full 3D potential of airborne laser scanning (ALS) point clouds by using a 3D neural network for individual tree detection. Specifically, a sparse convolutional network was used for 3D feature extraction, feeding both semantic segmentation and circular object detection outputs, which were combined for further increased accuracy. We demonstrate the capability of our approach on an urban topographic ALS point cloud with 10,864 hand-labeled ground truth trees. Our method achieved an average precision of 83% with respect to the common 0.5 intersection-over-union criterion. 85% of the stems were found correctly with a precision of 88%, while the tree area was covered by the individual tree detections with an F1 accuracy of 92%. Thereby, we outperformed traditional delineation baselines and recent detection networks.
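
Circular object detections can be scored against ground-truth trees with a circle-based intersection over union. The sketch below, with hypothetical detections, shows the overlap geometry and a greedy matching that yields true/false positives at the 0.5 IoU threshold; the paper's actual evaluation protocol may differ:

```python
import math

def circle_iou(c1, c2):
    """Intersection over union of two circular footprints (x, y, r),
    a natural model for a tree crown seen from above."""
    x1, y1, r1 = c1
    x2, y2, r2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                       # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):                # one contains the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                  # lens-shaped overlap
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        corner = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                                 * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - corner
    union = math.pi * r1 * r1 + math.pi * r2 * r2 - inter
    return inter / union

def match_detections(dets, gts, iou_thr=0.5):
    """Greedily match detections to ground-truth trees; returns
    (true positives, false positives, false negatives)."""
    unmatched = list(range(len(gts)))
    tp = 0
    for det in dets:
        best_i, best_iou = -1, iou_thr
        for i in unmatched:
            iou = circle_iou(det, gts[i])
            if iou >= best_iou:
                best_i, best_iou = i, iou
        if best_i >= 0:
            unmatched.remove(best_i)
            tp += 1
    return tp, len(dets) - tp, len(unmatched)
```

From the (tp, fp, fn) counts, precision tp/(tp+fp) and recall tp/(tp+fn) follow directly; averaging precision over recall levels gives the reported average precision.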