
    Building extraction for 3D city modelling using airborne laser scanning data and high-resolution aerial photo

    Light detection and ranging (LiDAR) technology has become a standard tool for three-dimensional mapping because it offers a fast rate of data acquisition with an unprecedented level of accuracy. This study presents an approach to accurately extract and model buildings in three-dimensional space from airborne laser scanning data acquired over Universiti Putra Malaysia in 2015. First, the point cloud was classified into ground and non-ground xyz points. The ground points were used to generate a digital terrain model (DTM), while a digital surface model (DSM) was produced from the entire point cloud. From the DSM and DTM, we obtained a normalised DSM (nDSM) representing the height of features above the terrain surface. Thereafter, the DSM, DTM, nDSM, laser intensity image and orthophoto were combined into a single data file by layer stacking. The integrated data were then segmented into image objects using Object-Based Image Analysis (OBIA), and the resulting image objects were classified into four land cover classes: building, road, waterbody and pavement. Assessment of the classification produced an overall accuracy of 94.02% and a Kappa coefficient of 0.88. The extracted building footprints from the building class were then further processed to generate a 3D model. The model provides a 3D visual perception of the spatial pattern of the buildings, which is useful for simulating disaster scenarios for emergency management.
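    The two quantitative steps in this abstract, deriving the nDSM as the difference between DSM and DTM, and summarising classification quality with overall accuracy and the Kappa coefficient, can be sketched as below. This is a minimal illustration, not the study's implementation; the matrices and values are made up.

```python
def normalised_dsm(dsm, dtm):
    """nDSM = DSM - DTM: height of features above the terrain surface."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal.
    observed = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of row and column marginals per class.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion) for i in range(n)
    ) / total ** 2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Illustrative 2-class example: 85% overall accuracy, kappa 0.7.
oa, kappa = overall_accuracy_and_kappa([[40, 10], [5, 45]])
```

    Kappa discounts the agreement expected by chance, which is why a 94.02% overall accuracy can coexist with a lower kappa of 0.88 when class sizes are unbalanced.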

    Improving tree species classification using UAS multispectral images and texture measures

    This paper focuses on the use of ultra-high resolution Unmanned Aircraft Systems (UAS) imagery to classify tree species. Multispectral surveys were performed on a plant nursery to produce Digital Surface Models and orthophotos with a ground sample distance of 0.01 m. Different combinations of multispectral images, multi-temporal data, and texture measures were employed to improve classification. The Grey Level Co-occurrence Matrix was used to generate texture images with different window sizes, and procedures for selecting optimal texture features and window sizes were investigated. The study evaluates how methods used in Remote Sensing can be applied to ultra-high resolution UAS images. Combinations of original and derived bands were classified with the Maximum Likelihood algorithm, and Principal Component Analysis was conducted to understand the correlation between bands. The study shows that the use of texture features produces a significant increase in Overall Accuracy, from 58% to 78% or 87%, depending on the component reduction applied. The improvement given by the introduction of texture measures is evident in the User's and Producer's Accuracy as well. For classification purposes, the inclusion of texture can compensate for the difficulty of performing multi-temporal surveys.
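    The Grey Level Co-occurrence Matrix (GLCM) texture measures mentioned above can be sketched as follows: for each moving window, count how often pairs of grey levels co-occur at a fixed pixel offset, then derive a statistic such as contrast. This is an illustrative sketch, not the authors' processing chain; window contents and the offset are invented.

```python
from collections import Counter

def glcm(window, dx=1, dy=0):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset."""
    counts = Counter()
    rows, cols = len(window), len(window[0])
    for y in range(rows):
        for x in range(cols):
            yy, xx = y + dy, x + dx
            if 0 <= yy < rows and 0 <= xx < cols:
                i, j = window[y][x], window[yy][xx]
                counts[(i, j)] += 1
                counts[(j, i)] += 1  # count both directions (symmetry)
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def contrast(p):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)**2."""
    return sum(prob * (i - j) ** 2 for (i, j), prob in p.items())

# A homogeneous window has zero contrast; any grey-level variation raises it.
texture_value = contrast(glcm([[0, 0, 1], [0, 0, 1], [0, 2, 2]]))
```

    In practice this statistic is computed per band, per window size, and per offset direction, which is why feature and window-size selection is a study question in its own right.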

    An automatic building extraction and regularisation technique using LiDAR point cloud data and orthoimage

    The development of robust and accurate methods for automatic building detection and regularisation using multisource data remains a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, scene complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally imposed, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from both the point cloud and orthoimagery. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection, and extraction of partially occluded building parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets, which differ in point density (1 to 29 points/m2), building size, shadow, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with correctness above 95%, demonstrating the robustness of the approach. The absence of over-segmentation and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. When compared with six existing similar methods, the proposed approach performs significantly better on the more complex (Australian) data sets, and performs as well as or better than its counterparts on the ISPRS benchmark. © 2016 by the authors
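    The per-area completeness and correctness figures quoted above are standard building-detection metrics: completeness is the fraction of reference building area that was detected, and correctness is the fraction of detected area that is real building. A minimal sketch, with illustrative areas rather than the paper's results:

```python
def completeness(tp_area, fn_area):
    """Per-area completeness: detected reference area / total reference area."""
    return tp_area / (tp_area + fn_area)

def correctness(tp_area, fp_area):
    """Per-area correctness: true detected area / total detected area."""
    return tp_area / (tp_area + fp_area)

# E.g. 90 m2 of buildings detected, 10 m2 missed, 5 m2 falsely detected:
comp = completeness(90.0, 10.0)  # fraction of real buildings found
corr = correctness(90.0, 5.0)    # fraction of detections that are real
```

    The trade-off the abstract describes follows directly: stricter size and height constraints raise correctness by removing false positives, but lower completeness by discarding small or occluded buildings.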

    DSM and DTM generation from VHR satellite stereo imagery over plastic covered greenhouse areas

    Agriculture under Plastic Covered Greenhouses (PCG) has represented a step forward in the evolution from traditional to industrial farming. However, the PCG-based agricultural model has also been criticized for its associated environmental impacts, such as plastic waste, visual impact, soil pollution, biodiversity degradation and local runoff alteration. In this sense, timely and effective PCG mapping is the only way to help policy-makers define plans that balance farmers' profit against the environmental impact on the remaining inhabitants. This work proposes a methodological pipeline for producing high added value 3D geospatial products (Digital Surface Models (DSM) and Digital Terrain Models (DTM)) from VHR satellite imagery over PCG areas. The 3D information layer provided through the devised approach could be a valuable complement to the traditional 2D spectral information offered by VHR satellite imagery for improving PCG mapping over large areas. The approach was tested in Almeria (Southern Spain) on a WorldView-2 VHR satellite stereo-pair. Once the gridded DSM and DTM were built, their vertical accuracy was assessed against lidar data provided by the Spanish Government (PNOA Programme). Regarding DSM completeness, the image matching method based on hierarchical semi-global matching yielded much better scores (98.87%) than the traditional method based on area-based matching with a cross-correlation threshold (86.65%) when tested on the study area with the highest concentration of PCG (around 85.65% PCG land cover). However, both image matching methods yielded similar vertical accuracy results for the finally interpolated DSM, with mean errors ranging from 0.01 to 0.35 m and random errors (standard deviation) between 0.56 and 0.82 m. The DTM error figures also showed no significant differences between the two image matching methods, although they were highly dependent on the DSM-to-DTM filtering error, which is in turn closely related to greenhouse density and terrain complexity.
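    The vertical accuracy assessment described above separates systematic error (the mean of the DSM-minus-reference residuals) from random error (their standard deviation). A minimal sketch of that computation, with invented check-point heights rather than the study's lidar data:

```python
import math

def vertical_accuracy(model_heights, reference_heights):
    """Mean error (bias) and standard deviation (random error) of
    model-minus-reference height residuals at check points."""
    residuals = [m - r for m, r in zip(model_heights, reference_heights)]
    n = len(residuals)
    mean = sum(residuals) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in residuals) / n)
    return mean, std

# Illustrative: four DSM heights checked against lidar reference heights.
bias, sigma = vertical_accuracy([10.2, 9.8, 10.1, 9.9], [10.0, 10.0, 10.0, 10.0])
```

    Reporting the two figures separately is what lets the abstract conclude that the matching methods agree in accuracy (similar mean and standard deviation) even though their completeness differs sharply.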

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. With a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping has become further complicated. Although a plethora of review articles attempt to guide researchers in making an informed choice of landcover mapping methods, they either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, yet these issues were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the varying basic analysis units for landcover mapping tasks, and moving to challenges and solutions on three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work.
