29 research outputs found

    Photogrammetric evaluation of space linear array imagery for medium scale topographic mapping

    This thesis is concerned with the 2D and 3D mathematical modelling of satellite-based linear array stereo images and with the implementation of this modelling in a general adjustment program for use in sophisticated analytically-based photogrammetric systems. The programs have also been used to evaluate the geometric potential of linear array images in different configurations for medium-scale topographic mapping. In addition, an analysis of the information content that can be extracted for topographic mapping purposes has been undertaken. The main aspects covered within this thesis are:
    - 2D mathematical modelling of space linear array images;
    - 3D mathematical modelling of the geometry of cross-track and along-track stereo linear array images taken from spaceborne platforms;
    - the algorithms developed for use in the general adjustment program which implements the 2D and 3D modelling;
    - geometric accuracy tests of space linear array images conducted over high-accuracy test fields in different environments;
    - evaluation of the geometric capability and information content of space linear array images for medium-scale topographic mapping.
    This thesis concludes that the mathematical modelling of the geometry and the adjustment program developed during the research have the capability to handle the images acquired from all available types of space linear array imaging systems. Furthermore, the program has been developed to handle image data from the forthcoming very high-resolution space imaging systems utilizing flexible pointing of their linear array sensors. It also concludes that cross-track and along-track stereo images, such as those acquired by the SPOT and MOMS-02 linear array sensors, are suitable for map compilation at scales of 1:50,000 and smaller, but only in conjunction with a comprehensive field completion survey to supplement the data acquired from the satellite imagery.
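    The core geometric difference from frame imagery, which drives the 2D/3D modelling described above, is that each line of a linear array image is exposed at its own instant. The following is a minimal sketch of this pushbroom condition, with entirely hypothetical platform numbers (not taken from the thesis):

```python
import numpy as np

# Illustrative sketch: a ground point is imaged only at the instant its look
# vector loses its along-track component (the pushbroom condition), assuming
# a locally straight, constant-velocity trajectory. All values are toy numbers.

def scan_time(ground_pt, pos0, vel):
    """Solve (g - pos0 - vel*t) . vel = 0 for the exposure time t."""
    return np.dot(ground_pt - pos0, vel) / np.dot(vel, vel)

pos0 = np.array([0.0, 0.0, 700e3])    # platform start, ~700 km altitude
vel = np.array([0.0, 7000.0, 0.0])    # ~7 km/s along-track (y axis)
g = np.array([1000.0, 14000.0, 0.0])  # ground point in a local level frame

t = scan_time(g, pos0, vel)
print(t)  # -> 2.0, the instant the line containing this point is exposed
```

Solving this condition per ground point is the starting point for the time-dependent collinearity equations used in rigorous pushbroom models.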

    Using pixel-based and object-based methods to classify urban hyperspectral features

    Object-based image analysis methods have been developed recently and have since become a very active research topic in the remote sensing community, mainly because researchers have begun to study the spatial structures within the data, whereas pixel-based methods use only the spectral content. To evaluate the applicability of object-based image analysis methods for land-cover information extraction from hyperspectral data, a comprehensive comparative analysis was performed. In this study, six supervised classification methods were selected from the pixel-based category: maximum likelihood (ML), Fisher linear likelihood (FLL), support vector machine (SVM), binary encoding (BE), spectral angle mapper (SAM), and spectral information divergence (SID). The classifiers were applied to several features extracted from the original spectral bands in order to avoid the Hughes phenomenon and to obtain a sufficient number of training samples. Three supervised and four unsupervised feature extraction methods were used. Pixel-based classification was conducted in the first step of the proposed algorithm, and the effective feature number (EFN) was then obtained. Image objects were thereafter created using the fractal net evolution approach (FNEA), the segmentation method implemented in the eCognition software. Several experiments were carried out to find the best segmentation parameters. The classification accuracy of these objects was compared with the accuracy of the pixel-based methods. In these experiments, the Pavia University Campus hyperspectral dataset was used; it was collected by the ROSIS sensor over an urban area in Italy. The results reveal that, for any combination of feature extraction and classification methods, the performance of the object-based methods was better than that of the pixel-based ones. Furthermore, the statistical analysis of the results shows that, on average, there is almost an 8 percent improvement in classification accuracy when the object-based methods are used.
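    As an illustration of the pixel-based side of the comparison, the spectral angle mapper (SAM) named above can be sketched in a few lines; the reference spectra here are invented toy values, not the Pavia training data:

```python
import numpy as np

def sam_classify(pixels, references):
    """Assign each pixel to the reference spectrum with the smallest
    spectral angle (arccos of the cosine similarity)."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)
    return np.argmin(np.arccos(cos), axis=1)

refs = np.array([[0.1, 0.4, 0.8],    # toy "vegetation-like" spectrum
                 [0.6, 0.5, 0.3]])   # toy "soil-like" spectrum
pix = np.array([[0.2, 0.8, 1.6],     # brighter copy of class 0 -> angle 0
                [0.5, 0.45, 0.25]])  # nearly parallel to class 1
print(sam_classify(pix, refs).tolist())  # -> [0, 1]
```

SAM's insensitivity to overall brightness (only the angle matters) is what makes it a common baseline in such comparisons.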

    Window Detection from UAS-Derived Photogrammetric Point Cloud Employing Density-Based Filtering and Perceptual Organization

    Point clouds with ever-increasing volume are regular data in 3D city modelling, in which building reconstruction is a significant part. The photogrammetric point cloud, generated from UAS (Unmanned Aerial System) imagery, is a novel type of data for building reconstruction; its positive characteristics, alongside its challenging qualities, make it an active theme of research. In this paper, patch-wise detection of the points of window frames on facades and roofs is undertaken using this kind of data. A density-based multi-scale filter is devised in the feature space of normal vectors to handle the high volume of data globally and to detect edges. Color information is employed on the downsized data to remove the inner clutter of the building. Perceptual organization directs the approach, via grouping and the Gestalt principles, to segment the filtered point cloud and to later detect window patches. The evaluation of the approach shows a completeness of 95% and 92% and a correctness of 95% and 96% for the detection of rectangular and partially curved window frames, respectively, in two large, heterogeneous, cluttered datasets. Moreover, most intrusions and protrusions cannot mislead the window detection approach. Several doors with glass parts and a number of parallel parts of scaffolding are mistaken for windows by the large-scale object detection approach, due to their patterns being similar to window frames. Sensitivity analysis of the input parameters demonstrates that the filter functionality depends on the radius of the density calculation in the feature space. Furthermore, successful employment of the Gestalt principles in the detection of window frames is influenced by the width determination of the window partitioning.
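    The density-based filtering idea can be illustrated as follows: normals on planar faces cluster tightly in the feature space, while edge points carry sparse, isolated normals. This is a simplified single-scale sketch with made-up parameters, not the paper's multi-scale implementation:

```python
import numpy as np

def density_filter(normals, radius=0.2, min_neighbors=3):
    """Keep points whose unit normal lies in a dense region of the feature
    space (planar faces); sparse normals mark edge candidates. Brute-force
    O(n^2) distances for clarity; a KD-tree would be used at scale."""
    d = np.linalg.norm(normals[:, None, :] - normals[None, :, :], axis=2)
    counts = (d < radius).sum(axis=1) - 1   # exclude the point itself
    return counts >= min_neighbors

normals = np.array([[0.0, 0.0, 1.0]] * 5 +   # five normals from one face
                   [[1.0, 0.0, 0.0]])        # one isolated (edge-like) normal
print(density_filter(normals).tolist())
# -> [True, True, True, True, True, False]
```

The radius of the density calculation is exactly the parameter the paper's sensitivity analysis identifies as critical.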

    Monitoring Long-Term Spatiotemporal Changes in Iran Surface Waters Using Landsat Imagery

    Within water resources management, surface water area (SWA) variation plays a vital role in hydrological processes as well as in agriculture, environmental ecosystems, and ecological processes. The monitoring of long-term spatiotemporal SWA changes is even more critical within highly populated regions that have an arid or semi-arid climate, such as Iran. This paper examined variations in SWA in Iran from 1990 to 2021 using about 18,000 Landsat 5, 7, and 8 satellite images through the Google Earth Engine (GEE) cloud processing platform. To this end, the performance of twelve water mapping rules (WMRs) within remotely-sensed imagery was also evaluated. Our findings revealed that (1) methods which provide a higher separation (derived from transformed divergence (TD) and Jeffries–Matusita (JM) distances) between the two target classes (water and non-water) result in higher classification accuracy (overall accuracy (OA) and user accuracy (UA) of each class). (2) Near-infrared (NIR)-based WMRs are more accurate than short-wave infrared (SWIR)-based methods for arid regions. (3) The SWA in Iran has an overall downward trend (observed by linear regression (LR) and sequential Mann–Kendall (SQMK) tests). (4) Of the five major water basins, only the Persian Gulf Basin had an upward trend. (5) While temperature has trended upward, the precipitation and normalized difference vegetation index (NDVI), a measure of the country’s greenness, have experienced a downward trend. (6) Precipitation showed the highest correlation with changes in SWA (r = 0.69). (7) Long-term changes in SWA were highly correlated (r = 0.98) with variations in the JRC world water map.
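    Many NIR-based water mapping rules of the kind evaluated in such studies follow the same pattern: a normalized band ratio plus a threshold. As a hedged illustration (the paper's twelve specific WMRs are not reproduced here), McFeeters' NDWI can be sketched as:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """McFeeters NDWI = (G - NIR) / (G + NIR); pixels above the threshold are
    mapped as water. The zero threshold is a common but tunable assumption."""
    ndwi = (green - nir) / (green + nir + 1e-12)  # guard against divide-by-zero
    return ndwi > threshold

green = np.array([0.30, 0.10])  # toy surface reflectances: water, dry soil
nir = np.array([0.05, 0.40])
print(ndwi_water_mask(green, nir).tolist())  # -> [True, False]
```

Water absorbs strongly in the NIR, so the ratio is positive over water and negative over most land covers, which is why NIR-based rules work well in arid scenes.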

    Cloud detection based on high resolution stereo pairs of the geostationary meteosat images

    Due to the considerable impact of clouds on the energy balance in the atmosphere and at the Earth's surface, they are of great importance for various applications in meteorology and remote sensing. An important aspect of cloud research is the detection of cloudy pixels in satellite images. In this research, we investigated a stereographic method on a new set of Meteosat images, namely the combination of the high resolution visible (HRV) channel of the Meteosat-8 Indian Ocean Data Coverage (IODC) image as a stereo pair with the HRV channel of the Meteosat Second Generation (MSG) Meteosat-10 image at 0° E. In addition, an approach based on the outputs of the stereo analysis was proposed to detect cloudy pixels. This approach uses a 2D scatterplot of the parallax value and the minimum intersection distance, which was applied to detect cloudy pixels in various image subsets with different amounts of cloud cover. Apart from the general advantage of the applied stereography method, which depends only on geometric relationships, the cloud detection results are also improved because: (1) the stereo pair consists of the HRV bands of the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) sensor, with the highest spatial resolution available from the Meteosat geostationary platform; and (2) the time difference between the image pairs is nearly 5 s, which improves the matching results and also decreases the effect of cloud movements. To verify this improvement, the results of the stereo-based approach were compared with three different reflectance-based target detection techniques: the adaptive coherent estimator (ACE), constrained energy minimization (CEM), and matched filter (MF). The comparison of the receiver operating characteristic (ROC) detection curves and the areas under these curves (AUC) showed better detection results with the proposed method. The AUC values were 0.79, 0.90, 0.90, and 0.93 for ACE, CEM, MF, and the proposed stereo-based detection approach, respectively. The results of this research shall enable a more realistic modelling of down-welling solar irradiance in the future.
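    The AUC figures quoted above can be computed without plotting the full ROC curve, via the rank-sum (Mann–Whitney) identity; a small sketch with toy detector scores (ties between scores are ignored for brevity):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum identity: the probability that a randomly chosen
    cloudy pixel scores higher than a randomly chosen clear one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([0.1, 0.4, 0.35, 0.8])  # e.g. parallax-based detector output
labels = np.array([0, 0, 1, 1])           # 1 = cloudy reference pixel
print(roc_auc(scores, labels))  # -> 0.75
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect separation, which puts the reported 0.79 to 0.93 range in context.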

    Enhanced algorithm based on persistent scatterer interferometry for the estimation of high-rate land subsidence

    Persistent scatterer interferometry (PSI) techniques that use amplitude analysis and consider a temporal deformation model for PS pixel selection are unable to identify PS pixels in rural areas lacking human-made structures. In contrast, high rates of land subsidence lead to significant phase-unwrapping errors in a recently developed PSI algorithm (StaMPS) that applies phase stability and amplitude analysis to select PS pixels in rural areas. The objective of this paper is to present an enhanced PSI-based algorithm to estimate the deformation rate in rural areas undergoing high and nearly constant rates of deformation. The proposed approach integrates the strengths of the existing PSI algorithms in PS pixel selection and phase unwrapping. PS pixels are first selected based on amplitude information and phase-stability estimation, as performed in StaMPS. The phase-unwrapping step, including the deformation-rate and phase-ambiguity estimation, is then performed using least-squares ambiguity decorrelation adjustment (LAMBDA). The atmospheric phase screen (APS) and the nonlinear deformation contribution to the phase are estimated by applying a high-pass temporal filter to the residuals derived from the LAMBDA method. The final deformation rate and the ambiguity parameter are re-estimated after subtracting the APS and the nonlinear deformation from the initial phase. The proposed method is applied to 22 ENVISAT ASAR images of the southwestern Tehran basin captured between 2003 and 2008. A quantitative comparison with leveling and GPS measurements demonstrates the significant improvement of the proposed PSI technique.
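    The high-pass temporal filtering step can be sketched with a simple moving-average low-pass whose output is subtracted from the per-epoch phase residuals; the window length and edge padding are illustrative choices, not those of the paper:

```python
import numpy as np

def highpass_temporal(residuals, window=5):
    """Subtract a moving-average low-pass from the per-epoch phase residuals;
    the slowly varying part approximates nonlinear deformation, and the
    remainder is attributed to APS plus noise."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(residuals, pad, mode='edge')  # edge padding at the ends
    lowpass = np.convolve(padded, kernel, mode='valid')
    return residuals - lowpass

# a constant residual series is entirely "low-pass": the filter returns zeros
print(np.allclose(highpass_temporal(np.full(10, 2.0)), 0.0))  # -> True
```

The separation relies on the APS being uncorrelated in time while deformation varies smoothly, which is the standard assumption behind temporal filtering in PSI.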

    Epipolar Resampling of Cross-Track Pushbroom Satellite Imagery Using the Rigorous Sensor Model

    Epipolar resampling aims to eliminate the vertical parallax of stereo images. Due to the dynamic nature of the exterior orientation parameters of linear pushbroom satellite imagery and the complexity of reconstructing the epipolar geometry using rigorous sensor models, no epipolar resampling approach based on these models has been proposed so far. In this paper, it is shown for the first time that the orientation of the instantaneous baseline (IB) of conjugate image points (CIPs) in linear pushbroom satellite imagery can be modeled with high precision in terms of the row and column numbers of the CIPs. Taking advantage of this feature, a novel approach is then presented for epipolar resampling of cross-track linear pushbroom satellite imagery, based on the rigorous sensor model. As the instantaneous position of the sensors remains fixed, the digital elevation model of the area of interest is not required in the resampling process. Experimental results obtained from two pairs of SPOT and one pair of RapidEye stereo imagery with different terrain conditions show that the proposed epipolar resampling approach achieves superior accuracy, as the remaining vertical parallaxes of all CIPs in the normalized images are close to zero.
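    The key observation, that the IB orientation varies smoothly with the row and column numbers of the CIPs, suggests fitting a low-order bivariate polynomial by least squares. The following is a generic sketch with synthetic angles; the paper's exact model form and coefficients are not reproduced here:

```python
import numpy as np

def fit_orientation(rows, cols, angles, order=2):
    """Least-squares fit of an orientation angle as a bivariate polynomial
    in the row and column numbers of the CIPs (generic sketch)."""
    terms = [rows**i * cols**j for i in range(order + 1)
             for j in range(order + 1 - i)]       # 1, c, c^2, r, rc, r^2
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, angles, rcond=None)
    return A @ coeffs                              # fitted angles

rng = np.random.default_rng(1)
rows = rng.uniform(0, 6000, 30)   # hypothetical CIP row numbers
cols = rng.uniform(0, 6000, 30)   # hypothetical CIP column numbers
angles = 0.1 + 2e-4 * rows - 1e-4 * cols + 1e-8 * rows * cols  # synthetic truth
print(np.max(np.abs(fit_orientation(rows, cols, angles) - angles)) < 1e-6)
```

Because the synthetic angles lie in the span of the quadratic terms, the fit reproduces them to numerical precision, mirroring the "high precision" modelling claim.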

    Large-Scale Accurate Reconstruction of Buildings Employing Point Clouds Generated from UAV Imagery

    High-density point clouds are valuable and detailed sources of data for different processes related to photogrammetry. We explore the knowledge-based generation of accurate large-scale three-dimensional (3D) models of buildings employing point clouds derived from UAV-based photogrammetry. A new two-level segmentation approach based on efficient RANdom SAmple Consensus (RANSAC) shape detection is developed to segment potential facades and roofs of the buildings and to extract their footprints. In the first level, the cylinder primitive is implemented to trim the point clouds and split the buildings; the second level of the segmentation produces planar segments. The efficient RANSAC algorithm is enhanced with point-based analyses to size up the segments at both levels of segmentation. Then, planar modelling is carried out employing contextual knowledge through a new constrained least squares method. New evaluation criteria are proposed based on conceptual knowledge; they can examine the ability of the approach to reconstruct footprints, 3D models, and planar segments, in addition to detecting over- and under-segmentation. Evaluation of the 3D models proves that the geometrical accuracy of LoD3 is achieved, since the average horizontal and vertical accuracies of the reconstructed vertices of the roofs and footprints are better than (0.24, 0.23) m and (0.19, 0.17) m for the first dataset, and (0.35, 0.37) m and (0.28, 0.24) m for the second dataset.
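    The plane detection at the heart of the second segmentation level can be illustrated with a plain RANSAC plane fit; this is a textbook single-plane version on toy data, not the efficient multi-primitive RANSAC used in the paper:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Single-plane RANSAC sketch: fit a plane through three random points
    and keep the hypothesis with the most inliers (distance < tol)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        s = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        dist = np.abs((points - s[0]) @ (n / norm))
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# toy scene: a 50-point planar roof patch at z = 0 plus 10 clutter points
xs, ys = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 10))
roof = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(50)])
rng = np.random.default_rng(2)
clutter = np.column_stack([rng.uniform(0, 10, 10),
                           rng.uniform(0, 10, 10),
                           rng.uniform(1, 5, 10)])
pts = np.vstack([roof, clutter])
mask = ransac_plane(pts)
print(int(mask.sum()))  # -> 50 roof points recovered, clutter rejected
```

The paper's "point-based analyses" address exactly the weakness visible here: a raw inlier count says nothing about whether the inliers form one connected, correctly sized segment.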

    Reducing the Effect of the Endmembers’ Spectral Variability by Selecting the Optimal Spectral Bands

    Variable environmental conditions cause different spectral responses of scene endmembers. Ignoring these variations affects the accuracy of the fractional abundances obtained from linear spectral unmixing. On the other hand, the correlation between the bands of hyperspectral data is not considered by conventional methods developed for dealing with spectral variability. In this paper, a novel approach is proposed to simultaneously mitigate spectral variability and reduce the correlation among different endmembers in hyperspectral datasets. The idea of the proposed method is to utilize the angular discrepancy of bands in the Prototype Space (PS), which is constructed using the endmembers of the image. Using the concepts of the PS, in which each band is treated as a space point, we propose a method to identify independent bands according to their angles. The proposed method comprises two main steps. In the first step, which aims to alleviate the spectral variability issue, image bands are prioritized based on their standard deviations computed over some sets of endmembers. Independent bands are then recognized in the prototype space, employing the angles between the prioritized bands. Finally, the unmixing process is carried out using the selected bands. In addition, the paper presents a technique to form a spectral library of endmember variability (sets of endmembers). The proposed method extracts endmember sets directly from the image data via a modified version of unsupervised spatial–spectral preprocessing. The performance of the proposed method was evaluated on five simulated images and three real hyperspectral datasets. The experiments show that the proposed method produces better results than the conventional methods of both groups: spectral variability reduction methods and independent band selection methods. The improvement is observed in terms of more appropriate bands being selected and more accurate fractional abundance values being estimated.
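    The two main steps, variability-based band prioritization followed by angle-based selection in the prototype space, can be sketched as follows; the interface, defaults, and toy spectra are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def select_bands(endmember_sets, n_bands=3, min_angle=0.1):
    """Rank bands by spectral variability (std over each endmember's set of
    signatures), then greedily keep bands whose prototype-space vectors are
    at least min_angle radians apart."""
    # endmember_sets: one (n_signatures, n_bands_total) array per endmember
    stds = np.mean([s.std(axis=0) for s in endmember_sets], axis=0)
    protos = np.array([s.mean(axis=0) for s in endmember_sets])  # bands = columns
    chosen = []
    for b in np.argsort(stds):              # most stable bands first
        v = protos[:, b] / np.linalg.norm(protos[:, b])
        angles = [np.arccos(np.clip(v @ protos[:, c] / np.linalg.norm(protos[:, c]),
                                    -1.0, 1.0)) for c in chosen]
        if all(a >= min_angle for a in angles):
            chosen.append(int(b))
        if len(chosen) == n_bands:
            break
    return chosen

# two endmembers, four toy bands; bands 0 and 1 point the same way in the
# prototype space, so only one of them survives the angular test
set_a = np.array([[1.01, 1.02, 0.03, 0.74],
                  [0.99, 0.98, -0.03, 0.66]])
set_b = np.array([[0.01, 0.02, 1.03, 0.74],
                  [-0.01, -0.02, 0.97, 0.66]])
print(select_bands([set_a, set_b]))  # -> [0, 2, 3]
```

Treating each band as a point in the endmember-indexed prototype space is what lets the angle test discard redundant (highly correlated) bands while the std ranking suppresses the most variability-prone ones.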