
    Temperature-Vegetation-soil Moisture-Precipitation Drought Index (TVMPDI): 21-year drought monitoring in Iran using satellite imagery within Google Earth Engine

    Remote Sensing (RS) offers efficient tools for drought monitoring, especially in countries that lack reliable and consistent in-situ multi-temporal datasets. In this study, a novel RS-based Drought Index (RSDI) named the Temperature-Vegetation-soil Moisture-Precipitation Drought Index (TVMPDI) was proposed. To the best of our knowledge, TVMPDI is the first RSDI to use four different drought indicators in its formulation. TVMPDI was validated and compared with six conventional RSDIs, including VCI, TCI, VHI, TVDI, MPDI and TVMDI. To this end, in-situ precipitation and soil temperature data were used. Different time scales of the meteorological Standardized Precipitation Index (SPI) were also used for validation of the RSDIs. TVMPDI was highly correlated with the monthly in-situ precipitation and soil temperature data, with correlation values of 0.76 and 0.81, respectively. The correlation coefficients between the RSDIs and the 3-month SPI ranged from 0.07 to 0.28, identifying TVMPDI as the most suitable index for subsequent analyses. Since the proposed TVMPDI considerably outperformed the other selected RSDIs, all spatiotemporal drought monitoring analyses for Iran over the past 21 years were conducted with TVMPDI. Different products from the Moderate Resolution Imaging Spectroradiometer (MODIS), Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) datasets, comprising 15,206 images, were processed on the Google Earth Engine (GEE) cloud computing platform. According to the results, Iran experienced its most severe drought in 2000, with a TVMPDI value of 0.715, lasting for almost two years. Conversely, TVMPDI showed a minimum value of 0.6781 in 2019, the lowest annual drought level. The drought severity and trend in the 31 provinces of Iran were also mapped. Various levels of decrease over the 21 years were found for different provinces, while Isfahan and Gilan were the only provinces showing an ascending drought trend (with trendline slopes of 0.004% and 0.002%, respectively). Khuzestan also faced a worrying drought prevalence in several years. In summary, this study provides updated information about drought trends in Iran using an advanced and efficient RSDI implemented on the GEE cloud computing platform. These results are beneficial for decision-makers and officials responsible for environmental sustainability, agriculture and the effects of climate change.
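    As a rough illustration of how such a composite index could be assembled, the sketch below combines four min-max normalized indicators (land-surface temperature, NDVI, soil moisture and precipitation) with equal weights. The abstract does not give the actual TVMPDI formula, so the normalization, the orientation of each indicator and the equal weighting are assumptions, not the published definition.

```python
# Minimal sketch of a composite remote-sensing drought index in the spirit of
# TVMPDI. The exact formulation is not given in the abstract, so this assumes
# a simple equal-weight combination of four min-max normalized indicators.
import numpy as np

def minmax_normalize(x):
    """Scale an array to [0, 1]; constant arrays map to 0."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return np.zeros_like(x) if span == 0 else (x - x.min()) / span

def composite_drought_index(lst, ndvi, soil_moisture, precipitation):
    """Combine four indicators into a single [0, 1] drought score.

    Higher land-surface temperature and lower vegetation, soil moisture and
    precipitation are treated as drier conditions (assumed orientation).
    """
    t = minmax_normalize(lst)                      # hotter -> drier
    v = 1.0 - minmax_normalize(ndvi)               # less vegetation -> drier
    m = 1.0 - minmax_normalize(soil_moisture)      # drier soil -> drier
    p = 1.0 - minmax_normalize(precipitation)      # less rain -> drier
    return (t + v + m + p) / 4.0                   # equal weights (assumption)

# Example with synthetic monthly pixel values
rng = np.random.default_rng(0)
index = composite_drought_index(
    lst=rng.uniform(290, 320, 100),
    ndvi=rng.uniform(0.1, 0.8, 100),
    soil_moisture=rng.uniform(0.05, 0.4, 100),
    precipitation=rng.uniform(0, 120, 100),
)
print(round(float(index.mean()), 3))
```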

    A decision fusion method based on multiple support vector machine system for fusion of hyperspectral and LIDAR data

    Fusion of remote sensing data from multiple sensors has been used increasingly for classification, because additional sources may provide more information and fusing different information can produce a better understanding of the observed site. Within the field of data fusion, combining light detection and ranging (LIDAR) and optical remote sensing data for land cover classification has attracted particular attention. This paper addresses the use of a decision fusion methodology for combining hyperspectral and LIDAR data in land cover classification. The proposed method applies a support vector machine (SVM)-based classifier fusion system to fuse hyperspectral and LIDAR data at the decision level. First, feature spaces are extracted from the LIDAR and hyperspectral data. Then, SVM classifiers are applied to each feature set. After the multiple classifiers are produced, Naive Bayes is used as a classifier fusion method to combine the results of the SVM classifiers from the two data sets. A co-registered hyperspectral and LIDAR data set from Houston, USA, was available to examine the effect of the proposed decision fusion methodology. Experimental results show that the proposed data fusion method improved the classification accuracy and kappa coefficient in comparison with the single data sets. The results revealed that the overall accuracies of SVM classification on the hyperspectral and LIDAR data separately are 88% and 58%, while the decision fusion methodology raises the accuracy to 91%.
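    A minimal sketch of this kind of decision-level fusion is shown below using scikit-learn: one SVM per data source, with Gaussian Naive Bayes combining their class-probability outputs. The synthetic features, the choice of GaussianNB and all parameter settings are illustrative assumptions rather than the paper's exact configuration.

```python
# Decision-fusion sketch: one SVM per data source, with a Naive Bayes combiner
# on the concatenated class probabilities. Data here is synthetic; the real
# hyperspectral and LiDAR features and any tuning are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, n_classes = 600, 4
labels = rng.integers(0, n_classes, n)
hyperspectral = rng.normal(labels[:, None], 1.0, (n, 50))   # stand-in features
lidar = rng.normal(labels[:, None], 2.0, (n, 5))            # stand-in features

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Source-level SVMs with probability outputs
svm_hs = SVC(probability=True, random_state=0).fit(hyperspectral[idx_train], labels[idx_train])
svm_li = SVC(probability=True, random_state=0).fit(lidar[idx_train], labels[idx_train])

def fused_features(idx):
    """Concatenate per-source class probabilities as the fusion input."""
    return np.hstack([svm_hs.predict_proba(hyperspectral[idx]),
                      svm_li.predict_proba(lidar[idx])])

# Naive Bayes as the decision-level combiner
fusion = GaussianNB().fit(fused_features(idx_train), labels[idx_train])
accuracy = (fusion.predict(fused_features(idx_test)) == labels[idx_test]).mean()
print(f"fused overall accuracy: {accuracy:.2f}")
```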

    Spectral-spatial feature learning for hyperspectral imagery classification using deep stacked sparse autoencoder

    Classification of hyperspectral remote sensing imagery is one of the most popular topics because of its intrinsic potential to capture the spectral signatures of materials and its distinct abilities for object detection and recognition. In the last decade, an enormous number of methods were suggested to classify hyperspectral remote sensing data using spectral features, though some do not use all of the available information and lead to poor classification accuracy; on the other hand, the exploration of deep features has recently received considerable attention and has become a research hot spot in the geoscience and remote sensing community as a means of enhancing classification accuracy. A deep learning architecture is proposed to classify hyperspectral remote sensing imagery by jointly utilizing spectral-spatial information. A stacked sparse autoencoder provides unsupervised feature learning to extract high-level representations of the joint spectral-spatial information; then, a soft classifier is employed to train the high-level features and to fine-tune the deep learning architecture. Comparative experiments are performed on two widely used hyperspectral remote sensing data sets (Salinas and PaviaU) and a coarse-resolution hyperspectral data set in the long-wave infrared range. The obtained results indicate the superiority of the proposed spectral-spatial deep learning architecture over conventional classification methods.
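    The sketch below illustrates the general idea of a stacked sparse autoencoder with a soft classification head, written in PyTorch (the paper's framework is not stated). Layer sizes, the L1 sparsity penalty, the joint loss and the training schedule are illustrative choices, not the published architecture.

```python
# Minimal stacked sparse autoencoder sketch for flattened spectral-spatial
# vectors. All dimensions, the sparsity weight and the loss mix are assumptions.
import torch
import torch.nn as nn

class SparseAutoencoderStack(nn.Module):
    def __init__(self, in_dim, hidden_dims=(128, 64), n_classes=9):
        super().__init__()
        dims = (in_dim, *hidden_dims)
        self.encoder = nn.Sequential(*[
            layer for i in range(len(hidden_dims))
            for layer in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
        ])
        self.decoder = nn.Linear(hidden_dims[-1], in_dim)        # reconstruction head
        self.classifier = nn.Linear(hidden_dims[-1], n_classes)  # soft classifier

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code), self.classifier(code)

def loss_fn(x, y, code, recon, logits, sparsity_weight=1e-3):
    """Reconstruction + classification loss with an L1 sparsity penalty."""
    return (nn.functional.mse_loss(recon, x)
            + nn.functional.cross_entropy(logits, y)
            + sparsity_weight * code.abs().mean())

# Synthetic spectral-spatial vectors (e.g. band values of a pixel and its neighbours)
x = torch.randn(256, 200)
y = torch.randint(0, 9, (256,))
model = SparseAutoencoderStack(in_dim=200)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                       # a few illustrative training steps
    code, recon, logits = model(x)
    loss = loss_fn(x, y, code, recon, logits)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```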

    Fusion of ALS Point Cloud and Optical Imagery for 3D Reconstruction of Building's Roof

    Three-dimensional building models are important in various applications such as disaster management and urban planning. In this paper, a method based on the fusion of LiDAR point cloud and aerial image data sources is proposed. First, using a 2D map, the point set relevant to each building is separated from the overall LiDAR point cloud. Next, the mean shift clustering algorithm is applied to the points of each building in a feature space, and the segmentation stage ends with the separation of parallel and coplanar segments. Using the adjacency matrix, adjacent segments are then intersected and the inner vertices are determined. In the image space, the area of each building is cropped and the mean shift algorithm is applied to it; the lines of the roof's outline are extracted with the Hough transform, and the points obtained from the intersection of these lines are transformed to the ground space. Finally, the roof is reconstructed by integrating the structural points of the intersected adjacent facets with the points transformed from the image space. To evaluate the efficiency of the proposed method, buildings with different shapes and levels of complexity were selected and the reconstructed 3D models were assessed. The results demonstrated the effectiveness of the method for different buildings.
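    The snippet below sketches only the segmentation step: per-point normals are estimated and clustered with mean shift, and a plane is fitted to each resulting roof segment. Using normal vectors as the clustering feature space, the synthetic gable roof and the bandwidth are assumptions for illustration; the full pipeline additionally uses the 2D map, image-space Hough lines and segment intersection.

```python
# Mean-shift segmentation of a building's roof points via estimated normals,
# followed by a least-squares plane fit per segment. Feature space, bandwidth
# and the synthetic roof are illustrative assumptions.
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.neighbors import NearestNeighbors

def estimate_normals(points, k=15):
    """Per-point unit normals from an SVD over the k nearest neighbours."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        pts = points[neigh] - points[neigh].mean(axis=0)
        _, _, vt = np.linalg.svd(pts, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n     # orient upward
    return normals

# Synthetic gable roof: two planar faces meeting at a ridge
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, (400, 2))
z = 5 + 0.5 * np.where(xy[:, 0] < 5, xy[:, 0], 10 - xy[:, 0]) + rng.normal(0, 0.02, 400)
points = np.column_stack([xy, z])

labels = MeanShift(bandwidth=0.2).fit_predict(estimate_normals(points))

for seg in np.unique(labels):
    pts = points[labels == seg]
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    print(f"segment {seg}: z = {a:.2f}x + {b:.2f}y + {c:.2f} ({len(pts)} points)")
```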

    A fuzzy decision making system for building damage map creation using high resolution satellite imagery

    Recent studies have shown high resolution satellite imagery to be a powerful data source for post-earthquake damage assessment of buildings. Manual interpretation of these images, while being a reliable way to find damaged buildings, is a subjective and time-consuming endeavor, rendering it unviable at times of emergency. The present research proposes a new state-of-the-art method for automatic damage assessment of buildings using high resolution satellite imagery. In this method, a set of preprocessing algorithms is first applied to the images. Then, by extracting a candidate building from both the pre- and post-event images, the part of the roof that remains intact after the earthquake is found. Afterwards, by comparing the shape and other structural properties of this roof part with its pre-event condition in a fuzzy inference system, the rate of damage for each candidate building is estimated. The results obtained from evaluating this algorithm on QuickBird images of the December 2003 Bam, Iran, earthquake prove the ability of this method for post-earthquake damage assessment of buildings.
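    As a toy illustration of a fuzzy inference step of this kind, the sketch below maps a single input (the fraction of the pre-event roof that remains intact) to a damage percentage with a Mamdani-style rule base. The membership functions, rules and the single-input simplification are invented for illustration and are not the system described in the paper.

```python
# Toy Mamdani-style fuzzy inference sketch for roof-based damage rating.
# Fuzzy sets, rules and the single input are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function evaluated at points x."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0, 1)

def damage_rate(intact_ratio):
    """Map intact-roof ratio in [0, 1] to a damage percentage via fuzzy rules."""
    # Fuzzify the input
    low = tri(np.array([intact_ratio]), 0.0, 0.0, 0.4)[0]     # little roof left
    med = tri(np.array([intact_ratio]), 0.2, 0.5, 0.8)[0]
    high = tri(np.array([intact_ratio]), 0.6, 1.0, 1.0)[0]    # roof mostly intact

    # Output universe: damage in percent, with three fuzzy sets
    d = np.linspace(0, 100, 501)
    severe, moderate, slight = tri(d, 60, 100, 100), tri(d, 20, 50, 80), tri(d, 0, 0, 40)

    # Rules: low intact -> severe, medium -> moderate, high -> slight (min-implication)
    aggregated = np.maximum.reduce([np.minimum(low, severe),
                                    np.minimum(med, moderate),
                                    np.minimum(high, slight)])
    return float(np.sum(d * aggregated) / (np.sum(aggregated) + 1e-9))  # centroid defuzzification

for r in (0.1, 0.5, 0.9):
    print(f"intact ratio {r:.1f} -> estimated damage {damage_rate(r):.0f}%")
```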

    A Multiple SVM System for Classification of Hyperspectral Remote Sensing Data

    With recent technological advances in remote sensing sensors and systems, very high-dimensional hyperspectral data are available for better discrimination among complex land-cover classes. However, the large number of spectral bands combined with the limited availability of training samples creates the Hughes phenomenon, or 'curse of dimensionality', in hyperspectral data sets. Moreover, these bands are usually highly correlated. Because of these complexities, traditional classification strategies often have limited performance on hyperspectral imagery. Given the limitations of a single classifier in these situations, Multiple Classifier Systems (MCS) may perform better. This paper presents a new method for classification of hyperspectral data based on a band clustering strategy within a multiple Support Vector Machine system. The proposed method uses a band grouping process based on a modified mutual information strategy to split the data into a few band groups. After the band grouping step, the algorithm benefits from the capabilities of SVM as the classification method by applying an SVM to each band group produced in the previous step. Finally, Naive Bayes (NB) is used as a classifier fusion method to combine the decisions of the SVM classifiers. Experimental results on two common hyperspectral data sets show that the proposed method improves the classification accuracy in comparison with standard SVM applied to all bands and with feature selection methods.
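    A simplified version of this pipeline is sketched below: bands are grouped (here by clustering their correlation profiles, a stand-in for the modified mutual-information grouping), one SVM is trained per group, and Gaussian Naive Bayes fuses the class probabilities. All data and parameter choices are synthetic and illustrative.

```python
# Band-group multiple-SVM sketch. The band grouping here uses correlation
# profiles instead of the paper's mutual-information measure (assumption).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_samples, n_bands, n_classes, n_groups = 500, 60, 3, 4
y = rng.integers(0, n_classes, n_samples)
X = rng.normal(0, 1, (n_samples, n_bands)) + y[:, None] * rng.uniform(0, 0.8, n_bands)

# Stand-in for mutual-information band grouping: cluster bands by correlation
band_profiles = np.corrcoef(X.T)                 # (n_bands, n_bands)
groups = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(band_profiles)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svms = {g: SVC(probability=True, random_state=0).fit(X_tr[:, groups == g], y_tr)
        for g in range(n_groups)}

def group_probabilities(X_part):
    """Concatenate the class probabilities from each band-group SVM."""
    return np.hstack([svms[g].predict_proba(X_part[:, groups == g]) for g in range(n_groups)])

# Naive Bayes fuses the per-group decisions
fusion = GaussianNB().fit(group_probabilities(X_tr), y_tr)
print("fused accuracy:", (fusion.predict(group_probabilities(X_te)) == y_te).mean())
```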

    Pose Estimation of Unmanned Aerial Vehicles Based on a Vision-Aided Multi-Sensor Fusion

    GNSS/IMU navigation systems offer a low-cost and robust solution for navigating UAVs. Since redundant measurements greatly improve the reliability of navigation systems, extensive research has been conducted to enhance the efficiency and robustness of GNSS/IMU with additional sensors. This paper presents a method for integrating reference data, images taken from UAVs, barometric height data and GNSS/IMU data to estimate accurate and reliable pose parameters of UAVs. Improved pose estimates are obtained by integrating the multi-sensor observations in an EKF algorithm with an IMU motion model. The implemented methodology has been shown to be very efficient and reliable for automatic pose estimation. The calculated position and attitude of the UAV, especially when GNSS was removed from the working cycle, clearly indicate the capability of the proposed methodology.
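    The sketch below reduces the fusion idea to a linear Kalman filter on the vertical channel only: an IMU-driven prediction step with barometric-altitude updates. The full method uses a larger EKF state and also ingests GNSS and image observations; the 1D state and all noise values here are assumptions for illustration.

```python
# Minimal Kalman-filter sketch of IMU + barometer fusion on the vertical
# channel. State, models and noise levels are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # state: [altitude, vertical velocity]
B = np.array([[0.5 * dt**2], [dt]])      # IMU acceleration input
H = np.array([[1.0, 0.0]])               # barometer measures altitude
Q = np.diag([0.01, 0.05])                # process noise (assumed)
R = np.array([[0.5]])                    # barometer noise (assumed)

x = np.array([[0.0], [0.0]])
P = np.eye(2)
rng = np.random.default_rng(3)

true_alt, true_vel = 0.0, 0.0
for step in range(50):
    accel = 0.2 if step < 25 else -0.2                     # simulated manoeuvre
    true_vel += accel * dt
    true_alt += true_vel * dt
    imu_accel = accel + rng.normal(0, 0.05)                # noisy IMU reading
    baro = true_alt + rng.normal(0, 0.7)                   # noisy barometer reading

    # Predict with the IMU-driven motion model
    x = F @ x + B * imu_accel
    P = F @ P @ F.T + Q
    # Update with the barometric altitude
    y = np.array([[baro]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated altitude {x[0, 0]:.2f} m vs true {true_alt:.2f} m")
```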

    Deep learning decision fusion for the classification of urban remote sensing data

    Multisensor data fusion is one of the most common and popular topics in remote sensing data classification because it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by jointly utilizing spectral-spatial information, a soft classifier is applied to train the high-level feature representations and to fine-tune the deep learning framework. Next, a decision-level fusion classifies objects of interest through the joint use of the sensors. Finally, a context-aware object-based postprocessing step is used to enhance the classification results. A series of comparative experiments are conducted on the widely used data set of the 2014 IEEE GRSS Data Fusion Contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
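    The sketch below illustrates the two final stages in a simplified form: per-sensor class-probability maps are fused at the decision level (here by simple averaging) and the labels are then smoothed segment-by-segment with a majority vote, a rough stand-in for the context-aware object-based postprocessing. The synthetic probabilities, the equal-weight averaging and the toy segmentation are all assumptions.

```python
# Decision-level fusion of per-sensor class-probability maps, followed by a
# majority vote inside each object segment. Inputs are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
h, w, n_classes = 20, 20, 4
probs_optical = rng.dirichlet(np.ones(n_classes), size=(h, w))   # sensor 1 softmax output
probs_lidar = rng.dirichlet(np.ones(n_classes), size=(h, w))     # sensor 2 softmax output

# Decision-level fusion: average the class probabilities (equal trust assumed)
fused = (probs_optical + probs_lidar) / 2.0
labels = fused.argmax(axis=-1)

# Object-based postprocessing: assign each segment its majority class
segments = (np.arange(h)[:, None] // 5) * 4 + (np.arange(w)[None, :] // 5)  # toy 4x4 segments
refined = labels.copy()
for seg_id in np.unique(segments):
    mask = segments == seg_id
    refined[mask] = np.bincount(labels[mask], minlength=n_classes).argmax()

print("label agreement before/after smoothing:", (labels == refined).mean())
```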