
    Building extraction for 3D city modelling using airborne laser scanning data and high-resolution aerial photo

    Light detection and ranging (LiDAR) technology has become a standard tool for three-dimensional mapping because it offers a fast rate of data acquisition with an unprecedented level of accuracy. This study presents an approach to accurately extract and model buildings in three-dimensional space from airborne laser scanning data acquired over Universiti Putra Malaysia in 2015. First, the point cloud was classified into ground and non-ground xyz points. The ground points were used to generate a digital terrain model (DTM), while a digital surface model (DSM) was produced from the entire point cloud. From the DSM and DTM, we obtained a normalised DSM (nDSM) representing the height of features above the terrain surface. Thereafter, the DSM, DTM, nDSM, laser intensity image and orthophoto were combined into a single data file by layer stacking. The integrated data was then segmented into image objects using Object-Based Image Analysis (OBIA), and the resulting image objects were classified into four land cover classes: building, road, waterbody and pavement. Assessment of the classification produced an overall accuracy of 94.02% and a Kappa coefficient of 0.88. The building footprints extracted from the building class were then further processed to generate a 3D model. The model provides a 3D visual perception of the spatial pattern of the buildings, which is useful for simulating disaster scenarios for emergency management
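
    The nDSM step described above is a per-pixel difference of the two elevation rasters. A minimal sketch (the 2x2 arrays are hypothetical, not the study's data):

```python
import numpy as np

def normalized_dsm(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """Height of features above the terrain: nDSM = DSM - DTM.
    Small negative values (noise below the terrain) are clipped to zero."""
    return np.clip(dsm - dtm, 0.0, None)

# Hypothetical 2x2 rasters in metres: one building 12 m above the terrain.
dsm = np.array([[52.0, 40.0], [41.0, 39.5]])
dtm = np.array([[40.0, 40.0], [41.0, 40.0]])
ndsm = normalized_dsm(dsm, dtm)
print(ndsm)
```

    Layer stacking then amounts to concatenating nDSM, intensity and orthophoto bands along a channel axis before segmentation.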

    Assessing Accuracy of the Vertical Component of Airborne Laser Scanner for 3D Urban Infrastructural Mapping.

    This study presents two methods used to measure the accuracy of the height component of Airborne Laser Scanning (ALS) data. The objectives are: to assess the accuracy of LiDAR data, to find the correlation between the actual and sensor-recorded heights, and to explore the effectiveness of a linear regression model for accuracy assessment. Field observation was carried out with a Total Station as reference data, and the corresponding data were obtained from a normalised digital surface model (nDSM). First, a statistical method was used to obtain a Root Mean Square Error (RMSE) value of 0.607 and a linear accuracy of 1.18948 at the 95% confidence level. Similarly, a linear regression function was used to obtain an RMSE value of 0.5073 and a linear accuracy of 1.10999. The study shows that the ALS-recorded height is reliable for 3D urban mapping. A resulting correlation coefficient of 0.9919 indicates very good agreement between the sensor-recorded height and the actual height of the object (R2 = 0.9839; p < 2.2e-16). The study indicates that a linear regression model is effective for assessing the accuracy of ALS data
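
    The reported figures are consistent with linear accuracy at the 95% confidence level being approximately 1.96 x RMSE (0.607 x 1.96 = 1.1897). A sketch of both quantities, using hypothetical check-point heights rather than the study's survey data:

```python
import numpy as np

def rmse(reference: np.ndarray, measured: np.ndarray) -> float:
    """Root Mean Square Error between reference heights (e.g. Total
    Station) and sensor-derived heights (e.g. from an nDSM)."""
    return float(np.sqrt(np.mean((measured - reference) ** 2)))

def linear_accuracy_95(rmse_value: float) -> float:
    """Vertical accuracy at the 95% confidence level, ~1.96 x RMSE."""
    return 1.96 * rmse_value

# Hypothetical check-point heights in metres.
ref = np.array([10.0, 12.5, 15.2, 9.8])
als = np.array([10.4, 12.1, 15.8, 9.2])
r = rmse(ref, als)
print(round(r, 4), round(linear_accuracy_95(r), 4))
```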

    Fusion of airborne LiDAR with multispectral SPOT 5 image for enhancement of feature extraction using Dempster–Shafer theory

    This paper presents an application of the data-driven Dempster-Shafer theory (DST) of evidence to fuse multisensor data for land-cover feature extraction. Over the years, researchers have applied DST to a variety of applications. However, less attention has been given to generating and interpreting probability, certainty, and conflict maps. Moreover, quantitative assessment of DST performance is often overlooked. In this paper, two main types of data were used to implement DST: Light Detection and Ranging (LiDAR) data and multispectral satellite imagery [Satellite Pour l'Observation de la Terre 5 (SPOT 5)]. The objectives are to classify land-cover types from fused multisensor data using DST, to quantitatively assess the accuracy of the classification, and to examine the potential of slope data derived from LiDAR for feature detection. First, we derived the normalized difference vegetation index (NDVI) from the SPOT 5 image and the normalized digital surface model (nDSM) from LiDAR by subtracting the digital terrain model from the digital surface model (DSM). The two products were fused using the DST algorithm, and the accuracy of the classification was assessed. Second, we generated a surface slope from LiDAR and fused it with NDVI. Subsequently, the classification accuracy was assessed using an IKONOS image of the study area as ground truth data. From the two processing stages, the NDVI/nDSM fusion had an overall accuracy of 88.7%, while the NDVI/slope fusion had 75.3%. The result indicates that the NDVI/nDSM integration performed better than NDVI/slope. Although the overall accuracy of the former is better than that of the latter, the contribution of individual classes reveals that building extraction from the fused slope and NDVI performed poorly. This study shows that DST is a time- and cost-effective method for accurate land-cover feature identification and extraction without the need for prior knowledge of the scene. Furthermore, the ability to generate other products such as certainty, conflict, and maximum probability maps for better visual understanding of the decision process makes it more reliable for applications such as urban planning, forest management, 3-D feature extraction, and map updating
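
    At the core of DST fusion is Dempster's rule of combination, which merges mass functions from independent evidence sources and renormalises by the conflicting mass. A minimal sketch; the per-pixel masses and class names below are hypothetical, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two basic belief assignments.
    Keys are frozensets of hypotheses; values are masses summing to 1."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass on contradictory evidence
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence for one pixel: nDSM favours 'building',
# NDVI only rules out 'vegetation'.
B, R, V = "building", "road", "vegetation"
m_ndsm = {frozenset({B}): 0.7, frozenset({B, R, V}): 0.3}
m_ndvi = {frozenset({B, R}): 0.8, frozenset({B, R, V}): 0.2}
fused = dempster_combine(m_ndsm, m_ndvi)
print(fused)
```

    The conflict term computed here is exactly what a per-pixel conflict map visualises; the largest singleton mass gives the maximum probability map.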

    Advanced differential interferometry synthetic aperture radar techniques for deformation monitoring: a review on sensors and recent research development

    This paper reviews advanced differential interferometric synthetic aperture radar (A-DInSAR) techniques, with two major components in focus. The first is the basic concepts, synthetic aperture radar (SAR) data sources, and the different algorithms documented in the literature, primarily focusing on persistent scatterers. In the second part, the techniques are compared in order to establish linkages in terms of the variability of their applications, their strengths, and the validation of interpreted results. Current issues in sensor and algorithm development are also discussed. The study identified six existing A-DInSAR algorithms used for monitoring various deformation types. Generally, reports of their performance indicate that all the techniques are capable of measuring deformation phenomena at varying spatial resolutions with a high level of accuracy. However, their usability in suburban and vegetated areas yields poor results compared to urbanized areas, due to inadequate permanent features that could provide sufficient coherent point targets. Meanwhile, there is continuous development in sensors and algorithms to expand the applicability domain of the technology to a wide range of deformable surfaces and displacement patterns with higher precision. On the sensor side, most of the latest SAR sensors employ longer wavelengths (L and P bands) to increase the penetrating power of the signal, and two other sensors (ALOS-2 PALSAR-2 and SENTINEL-1) are scheduled to be launched in 2013. Researchers are investigating the possibility of using single-pass sensors with different look angles for SAR data collection. With these, it is expected that more data will be available for various applications. Algorithms such as corner reflector InSAR, along-track interferometry, liqui-InSAR, and SqueeSAR are emerging to increase reliable estimation of deformation from different surfaces
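
    All DInSAR variants ultimately convert an unwrapped differential phase into a line-of-sight displacement via d = -lambda * dphi / (4*pi). A sketch of that conversion; the C-band wavelength and fringe count below are illustrative, not values from the review:

```python
import math

def los_displacement(delta_phase_rad: float, wavelength_m: float) -> float:
    """Line-of-sight displacement from unwrapped differential phase:
    d = -lambda * dphi / (4*pi). One full 2*pi fringe corresponds to
    half a wavelength of motion along the line of sight."""
    return -wavelength_m * delta_phase_rad / (4.0 * math.pi)

# Hypothetical example: C-band (~5.6 cm wavelength, e.g. SENTINEL-1),
# one full interferometric fringe.
d = los_displacement(2.0 * math.pi, 0.056)
print(round(d * 100, 2), "cm")
```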

    Automatic keypoints extraction from UAV image with refine and improved scale invariant features transform (RI-SIFT)

    In this study, the performance of Refine and Improved Scale Invariant Features Transform (RI-SIFT), recently developed and patented to automatically extract keypoints from UAV images, was examined. First, the RI-SIFT algorithm was used to detect and extract CPs from two overlapping UAV images. To evaluate the performance of RI-SIFT, the original SIFT, which employs nearest neighbour (NN) algorithms, was used to extract keypoints from the same adjacent UAV images. Finally, the quality of the points extracted with RI-SIFT was evaluated by feeding them into polynomial, adjust, and spline transform mosaicking algorithms to stitch the images. The results indicate that RI-SIFT performed better than SIFT and NN, with 271, 1415, and 1557 points extracted respectively. Also, the spline transform gave the most accurate mosaicked image, with an RMSE value of 1.0925 pixels (equivalent to 0.10051 m), followed by the adjust transform with a root mean square error (RMSE) value of 1.956821 pixels (0.17611 m), while the polynomial transform produced the least accurate result
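
    Classic SIFT matching, the baseline being improved on here, pairs descriptors by nearest neighbour and filters ambiguous pairs with Lowe's ratio test. A self-contained sketch with tiny 4-D stand-ins for real 128-D SIFT descriptors (the arrays are hypothetical):

```python
import numpy as np

def ratio_test_matches(desc_a: np.ndarray, desc_b: np.ndarray,
                       ratio: float = 0.8) -> list:
    """Nearest-neighbour matching with Lowe's ratio test: keep a match
    only when the closest descriptor in image B is clearly better
    than the second closest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Hypothetical descriptors: the third one in image A is ambiguous
# (equally close to two candidates) and is rejected by the ratio test.
a = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0.5, 0.5, 0, 0]])
b = np.array([[0.9, 0.1, 0, 0], [0.1, 0.9, 0, 0], [0, 0, 1.0, 0]])
print(ratio_test_matches(a, b))
```

    Filtering of this kind is why a refined matcher can report fewer but more reliable points, as in the 271 vs. 1415/1557 counts above.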

    Maximizing urban features extraction from multi-sensor data with Dempster-Shafer theory and HSI data fusion techniques

    This paper compares two multi-sensor data fusion techniques - Dempster-Shafer Theory (DST) and Hue Saturation Intensity (HSI). The objective is to evaluate the effectiveness of the methods in terms of processing time and the quality of information extraction. LiDAR and hyperspectral data were fused using the two methods to extract urban landscape features. First, the digital surface model (DSM), LiDAR intensity and hyperspectral image were fused with HSI. The result was then classified into five classes (metal roof building, non-metal roof building, tree, grass and road) using supervised classification (minimum distance), and the classification accuracy was assessed. Second, Dempster-Shafer Theory used the available evidence to fuse the normalized DSM, LiDAR intensity and hyperspectral derivatives, classifying the surface materials into the same five classes. It was found that DST performs well in discriminating different classes without expert information about the scene: an overall accuracy of 87% was achieved using DST, while the overall accuracy obtained with the HSI technique was 74.3%. Metal and non-metal roof types were clearly distinguished with DST, which was not the case with HSI. A fundamental setback of HSI is that it is limited to fusing only two sensor datasets at a time, whereas DST can integrate data from many different sensors. Besides the time required to select training sites for supervised classification, the accuracy of feature classification with HSI-fused data depends on the analyst's knowledge of the scene. This study shows DST to be an accurate and fast method for extracting urban features and roof types. It is hoped that, as the growing number of remote sensing technologies ushers in an era of redundant data, DST will become a desired technique available in most commercial image processing software packages
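
    The HSI substitution idea can be illustrated without a full RGB-to-HSI conversion: with intensity defined as I = (R+G+B)/3, replacing I while preserving hue and saturation reduces to rescaling each band by new_I/old_I. This is a simplified sketch of that substitution, not the paper's exact algorithm, and the pixel values are hypothetical:

```python
import numpy as np

def intensity_substitution(rgb: np.ndarray,
                           new_intensity: np.ndarray) -> np.ndarray:
    """Simplified HSI-style fusion: replace the mean-based intensity
    I = (R+G+B)/3 with a co-registered band (e.g. LiDAR-derived),
    keeping band ratios (hue/saturation) by rescaling."""
    old_i = rgb.mean(axis=-1, keepdims=True)
    scale = np.divide(new_intensity[..., None], old_i,
                      out=np.ones_like(rgb[..., :1]), where=old_i > 0)
    return np.clip(rgb * scale, 0.0, 1.0)

# Hypothetical 1x2 RGB composite fused with a LiDAR intensity band.
rgb = np.array([[[0.2, 0.4, 0.6], [0.3, 0.3, 0.3]]])
lidar_i = np.array([[0.8, 0.1]])
fused = intensity_substitution(rgb, lidar_i)
print(fused)
```

    Because only one band can stand in for intensity per pass, this construction makes the two-datasets-at-a-time limitation of HSI fusion concrete.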

    Landslide susceptibility mapping: machine and ensemble learning based on remote sensing big data

    Predicting landslide occurrences can be difficult. However, failure to do so can be catastrophic, causing unwanted tragedies such as property damage, community displacement, and human casualties. Research into landslide susceptibility mapping (LSM) attempts to alleviate such catastrophes through the identification of landslide-prone areas. Computational modelling techniques have been successful in related disaster scenarios, which motivates this work to explore such modelling for LSM. In this research, the potential of supervised machine learning and ensemble learning is investigated. Firstly, the Flexible Discriminant Analysis (FDA) supervised learning algorithm is trained for LSM and compared against other algorithms that have been widely used for the same purpose, namely Generalized Logistic Models (GLM), Boosted Regression Trees (BRT or GBM), and Random Forest (RF). Next, an ensemble model consisting of all four algorithms is implemented to examine possible performance improvements. The dataset used to train and test all the algorithms consists of a landslide inventory map of 227 landslide locations. From these sources, 13 conditioning factors are extracted to be used in the models. Experimental evaluations are made based on the True Skill Statistic (TSS), the Receiver Operating Characteristic (ROC) curve and the kappa index. The results show that the best TSS (0.6986), ROC (0.904) and kappa (0.6915) were obtained by the ensemble model. FDA on its own seems effective at modelling landslide susceptibility from multiple data sources, with performance comparable to GLM. However, it slightly underperforms when compared to GBM (BRT) and RF. RF seems most capable compared to GBM, GLM, and FDA when dealing with all conditioning factors
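
    The TSS metric used above is sensitivity + specificity - 1, computed from a binary confusion matrix. A minimal sketch with a hypothetical confusion matrix (not the study's results):

```python
def true_skill_statistic(tp: int, fp: int, fn: int, tn: int) -> float:
    """TSS = sensitivity + specificity - 1; ranges from -1 to +1,
    with 0 meaning no better than chance."""
    sensitivity = tp / (tp + fn)  # landslide pixels correctly flagged
    specificity = tn / (tn + fp)  # stable pixels correctly cleared
    return sensitivity + specificity - 1.0

# Hypothetical counts for landslide / non-landslide test locations.
print(round(true_skill_statistic(tp=85, fp=12, fn=15, tn=88), 4))
```

    Unlike overall accuracy, TSS is insensitive to the ratio of landslide to non-landslide samples, which is why it is favoured for inventories where stable terrain dominates.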

    Comparison of Clinico-Pathological Features Between Epithelial Ovarian Cancer Patients With and Without Endometriosis: A Cross-Sectional Study

    Objectives: Women with endometriosis have a high risk of developing ovarian carcinoma, which may arise from endometriosis lesions. Few studies have so far focused on the clinical factors in patients with endometriosis-associated ovarian cancer (EAOC). Accordingly, this study aimed to compare the demographic and obstetric characteristics of ovarian cancer patients with and without endometriosis. Materials and Methods: This cross-sectional study was conducted on 20 EAOC patients and 140 non-EAOC patients who had undergone surgery from 2011-17 at Al-Zahra hospital. The clinico-pathological characteristics of the two groups were compared: the first group had only a malignant epithelial ovarian tumor (non-EAOC), while the second group had both a malignant epithelial ovarian tumor and endometriosis (EAOC). A P value less than 0.05 was considered statistically significant. Results: EAOC cases were significantly younger (P=0.002) and had lower gravidity (P=0.002), parity (P=0.004), and number of term pregnancies (P=0.005) than non-EAOC patients. A larger proportion of EAOC cases had clear cell and endometrioid histopathology in comparison to non-EAOC patients (P<0.001), and most of the tumors in these cases were unilateral (P=0.01). Conclusions: We found that age, parity, gravidity, and term pregnancy, as well as the laterality and histopathologic type of epithelial ovarian cancers, differ between EAOC and non-EAOC patients. Further research is required to explain these differences

    Imaging spectroscopy and light detection and ranging data fusion for urban features extraction

    This study presents our findings on the fusion of Imaging Spectroscopy (IS) and LiDAR data for urban feature extraction. We carried out the necessary preprocessing of the hyperspectral image. The Minimum Noise Fraction (MNF) transform was used to order the hyperspectral bands according to their noise. Thereafter, we employed the Optimum Index Factor (OIF) to statistically select the most appropriate three-band combination from the MNF result. The composite image was classified using unsupervised classification (k-means algorithm) and the accuracy of the classification was assessed. A Digital Surface Model (DSM) and LiDAR intensity were generated from the LiDAR point cloud, and the LiDAR intensity was filtered to remove noise. The Hue Saturation Intensity (HSI) fusion algorithm was used to fuse the imaging spectroscopy data with the DSM, and with the filtered intensity. Quantitatively, the fusion of imaging spectroscopy and DSM was found to be better than that of imaging spectroscopy and LiDAR intensity. The three datasets (imaging spectroscopy, DSM-fused and LiDAR-intensity-fused data) were classified into four classes (building, pavement, trees and grass) using unsupervised classification, and the accuracy of the classification was assessed. The results show that the fusion of imaging spectroscopy and LiDAR data improved the visual identification of surface features. The classification accuracy also improved from an overall accuracy of 84.6% for the imaging spectroscopy data to 90.2% for the DSM-fused data, and the Kappa coefficient increased from 0.71 to 0.82. On the other hand, classification of the fused LiDAR intensity and imaging spectroscopy data performed poorly quantitatively, with an overall accuracy of 27.8% and a kappa coefficient of 0.0988
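
    The Kappa coefficient quoted in these accuracy assessments corrects observed agreement for chance agreement. A sketch from a confusion matrix; the 2x2 matrix below is hypothetical, not the study's assessment:

```python
import numpy as np

def kappa_coefficient(confusion: np.ndarray) -> float:
    """Cohen's kappa from a classification confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    total = confusion.sum()
    po = np.trace(confusion) / total                        # observed agreement
    pe = (confusion.sum(0) @ confusion.sum(1)) / total**2   # chance agreement
    return float((po - pe) / (1.0 - pe))

# Hypothetical 2-class confusion matrix (e.g. building vs non-building):
# overall accuracy 0.85, chance agreement 0.50, so kappa = 0.70.
cm = np.array([[45, 5], [10, 40]])
print(round(kappa_coefficient(cm), 4))
```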

    Fire-net: a deep learning framework for active forest fire detection

    Forest conservation is crucial for the maintenance of a healthy and thriving ecosystem. The field of remote sensing (RS) has been integral to forest land observation through the wide adoption of computer vision and sensor technologies. One critical area of interest is the detection of active forest fires. A forest fire, whether naturally occurring or human-induced, can quickly sweep through vast amounts of land, leaving behind unfathomable damage and loss of lives. Automatic detection of active forest fires (and burning biomass) is hence an important capability to pursue in order to avoid unwanted catastrophes. Early fire detection can also help decision makers plan mitigation strategies as well as extinguishing efforts. In this paper, we present a deep learning framework called Fire-Net that is trained on Landsat-8 imagery for the detection of active fires and burning biomass. Specifically, we fuse the optical (red, green, and blue) and thermal modalities from the images for a more effective representation. In addition, our network leverages residual convolution and separable convolution blocks, enabling deeper features to be extracted from coarse datasets. Experimental results show an overall accuracy of 97.35%, while the network is also able to robustly detect small active fires. The imagery for this study was taken from forest regions of Australia and North America, the Amazon rainforest, Central Africa and Chernobyl (Ukraine), where forest fires are actively reported
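
    The optical/thermal fusion described here is an early-fusion step: normalised bands are stacked into one multi-channel array before being fed to the network. A minimal sketch of that stacking (the tiny patches and min-max normalisation are illustrative assumptions, not Fire-Net's exact preprocessing):

```python
import numpy as np

def fuse_optical_thermal(red, green, blue, thermal) -> np.ndarray:
    """Stack min-max-normalised optical (RGB) and thermal bands into a
    single 4-channel array, the form an early-fusion CNN input takes."""
    def norm(b):
        b = b.astype(np.float32)
        return (b - b.min()) / (b.max() - b.min() + 1e-8)
    return np.stack([norm(red), norm(green), norm(blue), norm(thermal)],
                    axis=-1)

# Hypothetical 2x2 band patches.
r = np.array([[10, 20], [30, 40]])
x = fuse_optical_thermal(r, r, r, r * 3)
print(x.shape)
```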