
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. (Accepted for publication in IEEE Geoscience and Remote Sensing Magazine)

    Fused LISS IV Image Classification using Deep Convolution Neural Networks

    Modern Earth observation frameworks deliver large volumes of heterogeneous remote sensing data. How to manage this abundance while exploiting the complementarity of the sources is a key challenge in current remote sensing analysis. For optical Very High Spatial Resolution (VHSR) imagery, satellites acquire both Multi Spectral (MS) and Panchromatic (PAN) images at different spatial resolutions. Image fusion techniques address this by combining the complementary information from the different sensors. Classification of remote sensing images with deep learning techniques, in particular Convolutional Neural Networks (CNN), is gaining a solid footing owing to promising results. The most significant attribute of CNN-based strategies is that no prior feature extraction is required, which leads to good generalization capability. In this article, we propose a novel deep-learning-based SMDTR-CNN (Same Model with Different Training Rounds Convolutional Neural Network) approach for classifying the fused (LISS IV + PAN) image after image fusion. The fusion of remote sensing images from the CARTOSAT-1 (PAN image) and IRS P6 (LISS IV image) sensors is obtained by Quantization Index Modulation with Discrete Contourlet Transform (QIM-DCT). To enhance image fusion performance, specific noise is removed using a Bayesian filter based on an Adaptive Type-2 Fuzzy System. The outcomes of the proposed procedures are evaluated in terms of precision, classification accuracy, and kappa coefficient. The results show that SMDTR-CNN achieved the best overall precision and kappa coefficient. Likewise, the per-class accuracy on the fused LISS IV + PAN dataset improved by 2% and 5%, respectively.
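    The evaluation above reports overall accuracy and the kappa coefficient. As a minimal illustration (not the paper's own code), both can be computed from a confusion matrix in plain Python:

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal.
    observed = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement: sum over classes of (row total * column total) / n^2.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(k)
    ) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside raw accuracy for class maps with unequal class sizes.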

    GIS-based urban land use characterization and population modeling with subpixel information measured from remote sensing data

    This dissertation provides a deeper understanding of the application of the Vegetation-Impervious Surface-Soil (V-I-S) model to urban land use characterization and population modeling, focusing on the New Orleans area. Previous research on the V-I-S model in urban land use classification emphasized accuracy improvement while ignoring the stability of classifiers. I developed an evaluation framework using randomization techniques and the decision tree method to assess and compare the performance of classifiers and input features. The proposed evaluation framework is applied to demonstrate the superiority of V-I-S fractions and land surface temperature (LST) for urban land use classification. It could also be applied to the assessment of input features and classifiers in other remote sensing image classification contexts. An innovative urban land use classification based on the V-I-S model is implemented and tested in this dissertation. Because the V-I-S bivariate histogram resembles a topographic surface, a pattern that honors Lu and Weng's urban model, the V-I-S feature space is rasterized into a grey-scale image and subsequently partitioned by marker-controlled watershed segmentation, leading to an urban land use classification. This new approach proves insensitive to the selection of initial markers as long as they are positioned around the underlying watershed centers. This dissertation links the population distribution of New Orleans with its physiogeographic conditions as indicated by the V-I-S sub-pixel composition and land use information. It shows that the V-I-S fractions cannot be directly used to model the population distribution: both the ordinary least squares (OLS) and geographically weighted regression (GWR) models produced poor fits. In contrast, the land use information extracted from the V-I-S information and LST significantly improved the regression models, and a three-class land use model fits adequately. The GWR model reveals spatial nonstationarity: the relationship between the population distribution and land use is relatively weak in the city center and becomes stronger towards the city fringe, depicting a classic urban concentric pattern. This highlights that New Orleans is a complex metropolitan area whose population distribution cannot be fully modeled with physiogeographic measurements alone.
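    The dissertation judges its population models by regression fit. A minimal sketch of an ordinary least squares fit and its coefficient of determination, using numpy only (the predictor name and numbers below are illustrative, not taken from the study):

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares: solve y ~ X @ beta with an intercept column
    prepended; returns (beta, r_squared)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot

# Synthetic example: population density driven by an impervious-surface fraction.
rng = np.random.default_rng(0)
imperv = rng.uniform(0, 1, 50)
pop = 200 + 800 * imperv + rng.normal(0, 10, 50)
beta, r2 = ols_fit(imperv[:, None], pop)
```

    A GWR model extends this by fitting such a regression at each location with distance-weighted samples, which is what exposes the spatial nonstationarity described above.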

    The detection of wetlands using remote sensing in Qoqodala, Eastern Cape

    This dissertation aims to establish the feasibility of mapping wetlands in Qoqodala, Eastern Cape Province, South Africa, using Landsat and/or Aster imagery. The methodology for mapping wetlands using Landsat imagery proposed by Thompson, Marneweck, Bell, Kotze, Muller, Cox and Smith (2002) is adapted and applied to the study area. The same methodology is modified for use with Aster imagery and applied to the study area. In addition, the possibilities of treating Aster imagery as hyperspectral data are investigated, and a methodology using hyperspectral processing techniques is implemented.

    On the Use of Imaging Spectroscopy from Unmanned Aerial Systems (UAS) to Model Yield and Assess Growth Stages of a Broadacre Crop

    Snap bean production was valued at $363 million in 2018. Moreover, the increasing need for food production, driven by rapid population growth, makes this crop vitally important to study. Traditionally, harvest time determination and yield prediction are performed by collecting a limited number of samples. While this approach can work, it is inaccurate, labor-intensive, and based on a small sample size. Its ambiguity furthermore leaves the grower with under-ripe and over-mature plants, decreasing the final net profit and the overall quality of the product. A more cost-effective method would be a site-specific approach that saves time and labor for farmers and growers while providing them with precise detail on when and where to harvest and how much is to be harvested (while forecasting yield). In this study we used hyperspectral (i.e., point-based and image-based) as well as biophysical data to identify spectral signatures and biophysical attributes that could schedule harvest and forecast yield prior to harvest. Over the past two decades, there have been immense advances in the field of yield and harvest modeling using remote sensing data. Nevertheless, there still exists a wide gap in the literature covering yield and harvest assessment as a function of time using both ground-based and unmanned aerial systems. There is a need for a study focusing on crop-specific yield and harvest assessment using a rapid, affordable system. We hypothesize that a down-sampled multispectral system, tuned with spectral features identified from hyperspectral data, could address the mentioned gaps. Moreover, we hypothesize that the airborne data will contain noise that could negatively impact the performance and reliability of the utilized models. We thus address these knowledge gaps with three objectives: 1. Assess yield prediction of the snap bean crop using spectral and biophysical data, and identify discriminating spectral features via statistical and machine learning approaches. 2. Evaluate snap bean harvest maturity at both the plant growth stage and pod maturity level by means of spectral and biophysical indicators, and identify the corresponding discriminating spectral features. 3. Assess the feasibility of using a deep learning architecture for reducing noise in the hyperspectral data. In light of these objectives, we carried out a greenhouse study in the winter and spring of 2019, in which we studied temporal change in the spectra and physical attributes of the snap bean crop (Huntington cultivar) using a handheld spectrometer in the visible-to-shortwave-infrared domain (400-2500 nm). Chapter 3 of this dissertation focuses on yield assessment in the greenhouse study. Findings from this best-case-scenario study showed that the best time to assess yield is approximately 20-25 days prior to harvest, giving the most accurate yield predictions. The proposed approach was able to explain variability as high as R2 = 0.72, with spectral features residing in absorption regions for chlorophyll, protein, lignin, and nitrogen, among others. The captured data from this study contained minimal noise, even in the detector fall-off regions. Moving the focus to harvest maturity assessment, Chapter 4 presents findings from this objective in the greenhouse environment. Our findings showed that four stages of maturity, namely vegetative growth, budding, flowering, and pod formation, are distinguishable with 79% and 78% accuracy via the two introduced vegetation indices, the snap-bean growth index (SGI) and the normalized difference snap-bean growth index (NDSI), respectively. Moreover, pod-level maturity classification showed that ready-to-harvest and not-ready-to-harvest pods can be separated with 78% accuracy, with identified wavelengths residing in the green, red edge, and shortwave-infrared regions. Chapters 5 and 6 focus on transitioning the learned concepts from the greenhouse scenario to the UAS domain. We transitioned from a handheld spectrometer in the visible-to-shortwave-infrared domain (400-2500 nm) to a UAS-mounted hyperspectral imager in the visible-to-near-infrared (VNIR) region (400-1000 nm). Two years' worth of data, at two different geographical locations in upstate New York, were collected and examined for the yield modeling and harvest scheduling objectives. For analysis of the collected data, we introduced a feature selection library in Python, named "Jostar", to identify the most discriminating wavelengths. The findings from the UAS yield modeling study show that pod weight and seed length, as two different yield indicators, can be explained with R2 as high as 0.93 and 0.98, respectively. Identified wavelengths resided in the blue, green, red, and red edge regions, and 44-55 days after planting (DAP) proved to be the optimal time for yield assessment. Chapter 6, on the other hand, evaluates maturity assessment, in terms of pod classification, from the UAS perspective. Results from this study showed that the identified features resided in the blue, green, red, and red edge regions, contributing to an F1 score as high as 0.91 for differentiating between ready-to-harvest and not-ready-to-harvest pods. The identified features from this study are in line with those detected in the UAS yield assessment study. In order to compare the greenhouse study directly against the UAS studies, we adopted the methodology employed for the UAS studies and applied it to the greenhouse studies in Chapter 7. Since the greenhouse data were captured in the visible-to-shortwave-infrared (400-2500 nm) domain and the UAS data in the VNIR (400-1000 nm) domain, we truncated the spectral range of the greenhouse data to the VNIR domain. The comparison between the greenhouse study and the UAS studies for yield assessment, at early and late harvest stages, showed that spectral features in the 450-470, 500-520, 650, and 700-730 nm regions recurred on the days with the highest coefficient of determination. Moreover, 46-48 DAP, with a high coefficient of determination for yield prediction, recurred in five out of six data sets (two early stages, each with three data sets). On the other hand, the harvest maturity comparison between the greenhouse study and the UAS data sets showed that similar identified wavelengths reside in the ~450, ~530, ~715, and ~760 nm regions, with F1 scores of 0.78, 0.84, and 0.90 for the greenhouse, 2019 UAS, and 2020 UAS data, respectively. However, the noise in the captured UAS data, along with the high computational cost of the classical mathematical approach employed for denoising hyperspectral data, inspired us to improve the computational performance of hyperspectral denoising by assessing the feasibility of transferring the learned concepts to deep learning models. In Chapter 8, we approached hyperspectral denoising in the spectral domain (in a 1D fashion) for two types of noise: integrated noise and non-independent, non-identically distributed (non-i.i.d.) noise. We utilized Memory Networks, given their power in image denoising, introduced a new loss function, and benchmarked the model against several data sets and models. The proposed model, HypeMemNet, ranked first, by up to 40% in terms of signal-to-noise ratio (SNR), for resolving integrated noise, and first or second, by a small margin, for resolving non-i.i.d. noise. Our findings showed that a proper receptive field and a suitable number of filters are crucial for denoising integrated noise, while parameter size was of the highest importance for non-i.i.d. noise. Results from the conducted studies provide a comprehensive understanding encompassing yield modeling, harvest scheduling, and hyperspectral denoising. Our findings bode well for transitioning from an expensive hyperspectral imager to a multispectral imager tuned with the identified bands, as well as for employing a rapid deep learning model for hyperspectral denoising.
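    The SGI and NDSI indices above are defined in the thesis itself; as a hedged illustration, many such vegetation indices follow the generic normalized-difference form, sketched here with hypothetical reflectance values (the band pairing in the comment is an assumption, not the thesis's definition):

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index, (a - b) / (a + b);
    bounded to [-1, 1] for non-negative reflectances."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b)

# Hypothetical reflectance pairs, e.g. a red-edge (~715 nm) vs. green
# (~530 nm) sample, two of the regions the study found discriminative.
index = normalized_difference([0.45, 0.30], [0.15, 0.10])
```

    The ratio form makes the index insensitive to uniform brightness scaling, which is one reason such indices transfer between a handheld spectrometer and a UAS-mounted imager.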

    Automated and robust geometric and spectral fusion of multi-sensor, multi-spectral satellite images

    Earth observation satellite data acquired in recent years and decades provide an ideal data basis for accurate long-term monitoring and mapping of the Earth's surface and atmosphere. However, the vast diversity of sensor characteristics often prevents synergetic use. Hence, there is an urgent need to combine heterogeneous multi-sensor data into geometrically and spectrally harmonized time series of analysis-ready satellite data. This dissertation provides a mainly methodological contribution by presenting two newly developed open-source algorithms for sensor fusion, both thoroughly evaluated, tested, and validated in practical applications. AROSICS, a novel algorithm for multi-sensor image co-registration and geometric harmonization, provides robust, automated detection and correction of positional shifts and aligns the data to a common coordinate grid. The second algorithm, SpecHomo, was developed to unify differing spectral sensor characteristics. It relies on separate material-specific regressors for different land cover classes, enabling higher transformation accuracies and the estimation of unilaterally missing spectral bands. Building on these algorithms, a third study investigated the added value of synthesized red edge bands and of dense time series, enabled by sensor fusion, for estimating burn severity and mapping fire damage from Landsat. The results illustrate the effectiveness of the developed algorithms in reducing multi-sensor, multi-temporal data inconsistencies and demonstrate the added value of geometric and spectral harmonization for subsequent products. Synthesized red edge information proved valuable when retrieving vegetation-related parameters such as burn severity.
Moreover, using sensor fusion to combine multi-sensor time series was shown to offer great potential for more accurate monitoring and mapping of quickly evolving environmental processes.
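    The positional-shift detection that AROSICS automates is commonly built on phase correlation. A simplified numpy stand-in (not the AROSICS implementation, which also handles sub-pixel offsets and match validation) recovers a whole-pixel translation from the normalized cross-power spectrum:

```python
import numpy as np

def detect_shift(reference, target):
    """Estimate the integer (row, col) translation between two same-sized
    images by phase correlation: the phase-only cross-power spectrum
    transforms back to a delta peak located at the shift."""
    cross = np.fft.fft2(target) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint wrap around to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Illustrative check on a synthetic scene shifted by (3, -5) pixels.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -5), axis=(0, 1))
shift = detect_shift(scene, shifted)
```

    Correcting the detected shift and resampling to a common grid is then what aligns the multi-sensor data for time-series use.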

    Assessing remote sensing application on rangeland insurance in Canadian prairies

    Part of the problem with implementing a rangeland insurance program is that the acreage of different pasture types, which is required to determine an indemnity payment, is difficult to measure on the ground over large areas. Remote sensing techniques provide a potential solution to this problem. This study applied single-date SPOT (Satellite Pour l'Observation de la Terre) imagery, field-collected data, and geographic information system (GIS) data to the classification of land cover and vegetation at the species level. Two topographic correction models, the Minnaert model and C-correction, and two classification algorithms, the maximum likelihood classifier (MLC) and an artificial neural network (ANN), were evaluated. The feasibility of discriminating invasive crested wheatgrass from native species was investigated, and an exponential normalized difference vegetation index (ExpNDMI) was developed to increase the separability between crested wheatgrass and natives. A spectral separability index (SSI) was used to select suitable bands and vegetation indices for classification. The results show that topographic correction can effectively reduce intra-class radiometric variation caused by topographic effects in the study area and improve the classification. An overall accuracy of 90.5% was obtained by MLC using Minnaert-corrected reflectance, and MLC obtained higher classification accuracy (~5%) than the back-propagation-based ANN. Topographic correction reduced intra-class variation and improved classification accuracy by about 4% compared to the original reflectance. Crested wheatgrass was over-estimated in this study, and the result indicated that a single-date SPOT 5 image could not classify crested wheatgrass with satisfactory accuracy. However, the proposed ExpNDMI can reduce intra-class variation and enlarge inter-class variation, improving the ability to discriminate invasive crested wheatgrass from natives by 4% in overall accuracy. This study revealed that single-date SPOT imagery can perform an effective land cover classification and will provide a useful tool for updating land cover information in order to implement a rangeland insurance program.
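    The Minnaert model evaluated above can be sketched in its slope-aware form (after Smith et al., 1980). The radiances, angles, and k value below are illustrative only; in practice the band-specific Minnaert constant k is fitted from the image itself:

```python
import numpy as np

def minnaert_correction(radiance, incidence, slope, k):
    """Slope-aware Minnaert topographic correction:
    L_h = L * cos(slope) / (cos(incidence)^k * cos(slope)^k),
    where `incidence` is the local solar incidence angle (radians),
    `slope` is the terrain slope (radians), and `k` is the Minnaert constant."""
    ci = np.cos(incidence)
    ce = np.cos(slope)
    return radiance * ce / (ci ** k * ce ** k)

# Hypothetical sunlit and shaded pixels: the weakly illuminated, steeply
# tilted pixel is brightened proportionally more by the correction.
L = np.array([120.0, 60.0])           # observed radiance
i = np.radians([30.0, 70.0])          # local solar incidence angles
s = np.radians([10.0, 25.0])          # terrain slopes
corrected = minnaert_correction(L, i, s, k=0.5)
```

    With k = 1 this reduces to a Lambertian cosine correction; fitted k values below 1 temper the correction for non-Lambertian surfaces such as grassland canopies.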

    Potential Fossil Yield Classification (PFYC) Survey of Nevada Surficial Geology, and a Multi-Sensor, Remote Sensing, Change-Detection Study of Land-Use/Land-Cover Urbanization Impacting the Las Vegas Formation Located in Northwestern Las Vegas Valley

    This thesis is a combination of two separate but related projects. The first project is a Potential Fossil Yield Classification (PFYC) survey. The PFYC is a Bureau of Land Management-funded survey designed to synthesize paleontologic information into a geographic information system (GIS) as a distributable geodatabase. The database is designed to represent surficial geologic deposits contained in a polygon shapefile. Throughout the State of Nevada, each polygon represents a mapped geologic unit at a scale of at least 1:250,000. Each mapped geologic unit is then assigned a "potential fossil yield classification", a numerical ranking from 1 to 5 based on the known fossils within the unit; fossil type and abundance are considered in the assignment, with 1 being the lowest value and 5 the highest. The second project consists of a multi-temporal land-use/land-cover change detection analysis designed to measure the effects of rapid urbanization within the geologic unit identified as having the highest fossil potential based on the results of the PFYC survey. The Las Vegas Formation (LVfm) is a Pleistocene groundwater discharge deposit that has been shown to contain significant vertebrate fossils and was thus assigned a PFYC value of 5. The proximity of the LVfm to the densely populated city of Las Vegas provides a unique opportunity to quantify the effects of urbanization on lands rich in fossil resources. This project utilizes remotely sensed imagery and aerial light detection and ranging (LiDAR) point clouds to accurately quantify urbanization effects.
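    The change-detection component quantifies urbanization between two classified dates. A minimal post-classification comparison sketch in numpy (the class codes and pixel size are hypothetical, not taken from the thesis):

```python
import numpy as np

PIXEL_AREA_M2 = 30 * 30  # e.g., one Landsat-resolution pixel

def newly_urbanized_area_m2(lc_before, lc_after, urban_code=1):
    """Post-classification change detection: total area that converted
    from any non-urban class to the urban class between the two dates."""
    converted = (lc_before != urban_code) & (lc_after == urban_code)
    return int(converted.sum()) * PIXEL_AREA_M2

# Toy 2x2 land-cover rasters for two dates (0 = undisturbed, 1 = urban).
before = np.array([[0, 0],
                   [0, 1]])
after = np.array([[0, 1],
                  [1, 1]])
area = newly_urbanized_area_m2(before, after)  # two pixels converted
```

    Intersecting the converted mask with the LVfm polygon footprint would then isolate the urbanization affecting the fossil-bearing unit.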