
    Techniques and Challenges of the Machine Learning Method for Land Use/Land Cover (LU/LC) Classification in Remote Sensing Using the Google Earth Engine

    Land use and land cover information is crucial for accurately observing the globe. Owing to the many global changes affecting the planet, land use/land cover (LU/LC) classification is now regarded as a topic of high significance for the natural environment and an important field of research. Google Earth Engine provides satellite image datasets, including high-resolution imagery, which are used to analyze land areas. To address the dearth of review articles on the LU/LC classification process, we propose a full evaluation intended to help researchers continue their work. The purpose of this study is therefore to investigate the methodical steps involved in classifying land use and land cover using the Google Earth Engine platform. The techniques researchers most widely employ to achieve LU/LC classification with Google Earth Engine are examined, along with the classification of land use and land cover for a specific region using time series, the many types of LU/LC classes, and the approach employed by Google Earth Engine. The limitations of the GEE tool and the difficulties encountered while classifying land use and cover are also covered in this survey. The importance of this review lies in inspiring future scholars to tackle the LU/LC analysis problem successfully, and it offers researchers a road map for assessing land use/land cover classification.

    Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones

    This paper proposes a novel framework for fusing multi-temporal, multispectral satellite images and OpenStreetMap (OSM) data for the classification of local climate zones (LCZs). Feature stacking is the most commonly used method of data fusion, but its main drawback is that it does not consider the heterogeneity of multimodal optical images and OSM data. The proposed framework processes the two data sources separately and then combines them at the model level through two fusion models (the landuse fusion model and the building fusion model), which fuse optical images with the landuse and buildings layers of OSM data, respectively. In addition, a new approach to detecting building incompleteness in OSM data is proposed. The framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion Contest, and further validated on an additional test set containing manually labeled samples in Munich and New York. Experimental results indicate that, compared to a feature stacking-based baseline, the proposed framework is effective in fusing optical images with OSM data for LCZ classification, with high generalization capability on a large scale. Its classification accuracy outperforms the baseline by more than 6% and 2% on the 2017 IEEE GRSS Data Fusion Contest test set and the additional test set, respectively. The proposed framework is also less sensitive to spectral diversities of optical satellite images and thus achieves more stable classification performance than state-of-the-art frameworks.
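As a point of reference for the baseline the paper improves on, feature stacking simply concatenates the two modalities' per-pixel feature vectors before a single classifier sees them. A minimal numpy sketch of that stacking step (the array shapes and feature counts are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Illustrative per-pixel features: 10 derived from multi-temporal optical
# imagery and 4 rasterized from OSM landuse/building layers (counts assumed).
n_pixels = 1000
optical_features = np.random.rand(n_pixels, 10)
osm_features = np.random.rand(n_pixels, 4)

# Feature stacking: one joint matrix, ignoring the modalities' heterogeneity.
stacked = np.concatenate([optical_features, osm_features], axis=1)
# stacked.shape == (1000, 14); a single classifier is then trained on this.
```

The proposed framework instead trains a model per modality and fuses the outputs at the model level, which is what lets it treat the heterogeneous sources differently.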

    Generation of a Land Cover Atlas of environmentally critical zones using unconventional tools

    The abstract is in the attachment.

    Land use/land cover classification using machine learning models

    An ensemble model is proposed in this work, combining the extreme gradient boosting (XGBoost) classifier with a support vector machine (SVM) for land use and land cover classification (LULCC). We used multispectral Landsat-8 Operational Land Imager (OLI) data with six spectral bands of the electromagnetic (EM) spectrum. The study area is the administrative boundary of the twin cities of Odisha. Data collected in 2020 are classified into seven land use classes/labels: river, canal, pond, forest, urban, agricultural land, and sand. Comparative assessment of the results of ten machine learning models is accomplished by computing the overall accuracy, kappa coefficient, producer accuracy, and user accuracy. The ensemble classifier makes the classification more precise than the other state-of-the-art machine learning classifiers.
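The abstract does not specify how the XGBoost and SVM outputs are combined; one common ensembling scheme is soft voting, i.e. a weighted average of the two models' class-probability estimates. A minimal numpy sketch of that combination step, with small hypothetical probability arrays standing in for real classifier outputs (the study itself used seven classes; three are shown here for brevity):

```python
import numpy as np

def soft_vote(proba_a, proba_b, weights=(0.5, 0.5)):
    """Weighted average of two classifiers' class-probability arrays
    (n_samples x n_classes), followed by argmax to pick the class."""
    combined = weights[0] * proba_a + weights[1] * proba_b
    return combined.argmax(axis=1)

# Hypothetical per-pixel class probabilities from the two base models
xgb_proba = np.array([[0.7, 0.2, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.4, 0.4, 0.2]])
svm_proba = np.array([[0.6, 0.3, 0.1],
                      [0.1, 0.3, 0.6],
                      [0.3, 0.5, 0.2]])

labels = soft_vote(xgb_proba, svm_proba)  # → array([0, 2, 1])
```

The weights would normally be tuned on validation data; equal weights are shown only as a default.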

    Improvement in Land Cover and Crop Classification based on Temporal Features Learning from Sentinel-2 Data Using Recurrent-Convolutional Neural Network (R-CNN)

    Understanding current land cover, along with monitoring change over time, is vital for agronomists and agricultural agencies responsible for land management. The increasing spatial and temporal resolution of globally available satellite images, such as those provided by Sentinel-2, creates new possibilities for researchers to use freely available multi-spectral optical images, with decametric spatial resolution and frequent revisits, for remote sensing applications such as land cover and crop classification (LC&CC), agricultural monitoring and management, and environmental monitoring. Existing solutions dedicated to cropland mapping can be categorized as per-pixel or object-based. However, classification remains challenging when many agricultural crop classes are considered at a massive scale. In this paper, a novel deep learning model for pixel-based LC&CC is developed and implemented, based on Recurrent Neural Networks (RNNs) in combination with Convolutional Neural Networks (CNNs), using multi-temporal Sentinel-2 imagery of the central-north part of Italy, which has a diverse agricultural system dominated by economic crop types. The proposed methodology is capable of automated feature extraction by learning the time correlation of multiple images, which reduces manual feature engineering and the modeling of crop phenological stages. Fifteen classes, including the major agricultural crops, were considered in this study. We also tested widely used traditional machine learning algorithms for comparison: support vector machine (SVM), random forest (RF), kernel SVM, and the gradient boosting machine (XGBoost). The overall accuracy achieved by the proposed Pixel R-CNN was 96.5%, a considerable improvement over existing mainstream methods. This study shows that a Pixel R-CNN-based model offers a highly accurate way to assess and employ time-series data for multi-temporal classification tasks.
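A pixel-based recurrent-convolutional model consumes, for each pixel, a sequence of spectral vectors over time. A minimal numpy sketch of the data preparation that turns a multi-temporal image stack into per-pixel time series (the tile size, band count, and number of acquisitions are illustrative assumptions, not the study's values):

```python
import numpy as np

# Illustrative multi-temporal stack: T acquisitions of an H x W tile, B bands
T, H, W, B = 12, 64, 64, 10
stack = np.random.rand(T, H, W, B)

# Per-pixel sequences for a recurrent model: shape (n_pixels, T, B)
sequences = stack.transpose(1, 2, 0, 3).reshape(H * W, T, B)
assert sequences.shape == (4096, 12, 10)

# Sanity check: pixel (row 3, col 5) at time 7 maps to sequence row 3*W + 5
assert np.allclose(sequences[3 * W + 5, 7], stack[7, 3, 5])
```

Each row of `sequences` is then a training sample whose temporal axis the recurrent layers can learn from, which is the "learning time correlation of multiple images" the abstract describes.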

    Machine learning and high spatial resolution multi-temporal Sentinel-2 imagery for crop type classification

    Thesis (MPhil) -- Stellenbosch University, 2019. ENGLISH SUMMARY: Spatially-explicit crop type information is useful for estimating agricultural production areas. Such information is used for various monitoring and decision-making applications, including crop insurance, food supply-demand logistics, commodity market forecasting and environmental modelling. Traditional methods, such as ground surveys and agricultural censuses, involve high production costs and are often labour-intensive, which limits their use for timely and accurate crop type data production. Remote sensing, however, offers a dependable, cost-effective and timely way of mapping crop types. Although remote sensing approaches – particularly multi-temporal techniques – have been successfully employed for producing crop type information, this information is mostly available post-harvest. Researchers and decision-makers thus have to wait several months after harvest for such information, which is usually too late for many applications. The availability and accessibility of imagery collected with optical sensors make such data preferable for mapping crop types. However, these sensors are subject to cloud interference, which has been recognised as a source of error in the retrieval of surface parameters. It is therefore important to assess the strengths and weaknesses of using multi-temporal optical imagery for differentiating crop types. This study utilises Sentinel-2A and 2B imagery to perform several experiments in selected parts of the Western Cape, South Africa, to undertake this assessment. The first three experiments assessed the significance of image selection on the accuracy of crop type classification. A recommended number of Sentinel-2 images was selected using two different methods. The first of the three experiments was conducted with uni-temporal images.
Based on the performance rankings of the uni-temporal images, five images with the highest ranks were used to set up Experiment 2. The third experiment was undertaken with a handpicked set of five images, based on crop developmental stages. The two image selection methods were compared to each other and subsequently to the entire time-series, to determine the significance of selecting images for crop type mapping. These classifications were undertaken with several supervised machine learning classifiers and one parametric classifier. Results showed no significant difference in classification accuracies between the two image selection methods and the entire time-series. Overall, the support vector machine (SVM) and random forest (RF) algorithms outperformed all the other classifiers. The fourth experiment was undertaken by chronologically adding images to the classifiers. The progression of classification accuracies against time and the increase in the number of images were analysed to determine the earliest period (pre-harvest) when crops can be classified with sufficient accuracies. The highest pre-harvest accuracy achieved was then compared to that obtained at the end of the season, including images acquired post-harvest, to assess the effectiveness of machine learning classifiers for classifying crop types when only pre-harvest images are used. The results of this experiment showed that machine learning classifiers can classify crops when only preharvest images are used, with accuracies similar to those obtained when the entire time-series is used. Satisfactory classification accuracies were attainable as early as Aug/Sept (eight weeks before harvest). The fifth to tenth experiments were undertaken to assess the impact of cloud cover and image compositing on crop type classification accuracies. The fifth and sixth experiments were performed with non-composited images. 
Experiment Five was undertaken with cloud-free images only, while the sixth experiment used all available images, including cloud-contaminated observations. The seventh to tenth experiments were undertaken with monthly image composites computed using four different image compositing approaches. All these experiments were undertaken using several machine learning classifiers. The results showed that the machine learning classifiers performed best when all images – including cloud-contaminated images – were used as input. Image compositing had a detrimental effect on classification accuracies. Generally, multi-temporal Sentinel-2 data hold great potential for operational crop type map production early in the season. However, more work is needed to develop simple workflows for eliminating cloud cover, particularly for crop type mapping in areas characterised by frequent overcast conditions.
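Monthly compositing of the kind tested in the fifth to tenth experiments can be illustrated with a median composite, one common approach (the thesis does not name its four compositing methods, so this is an assumed example; cloud-masked pixels are represented as NaN):

```python
import numpy as np

# Hypothetical month of 4 acquisitions over a 2x2 tile; clouds masked as NaN
stack = np.array([
    [[0.3, 0.4], [0.2, 0.5]],
    [[np.nan, 0.4], [0.2, 0.5]],
    [[0.9, np.nan], [0.2, 0.5]],
    [[0.3, 0.4], [np.nan, 0.5]],
])

# Median composite over the time axis, ignoring cloud-masked observations
composite = np.nanmedian(stack, axis=0)
# composite ≈ [[0.3, 0.4], [0.2, 0.5]]
```

A composite like this yields one cloud-free-looking image per month, but as the results above show, discarding the individual observations this way can cost classification accuracy.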

    Sentinel-2 images for detection of wind damage in forestry

    The use of remote sensing for Earth observation is becoming increasingly popular as the number of satellites able to measure electromagnetic radiation at higher spatial, temporal and radiometric resolutions rises considerably. Among Earth observation applications, the detection of disturbances caused by natural catastrophes such as wind, earthquakes and fire is highly important. On 12 August 2017, a storm hit the south and south-east of Finland, causing severe disturbance to forest areas in which pine and spruce were the main land cover types. The study area corresponded to the extent of a Sentinel-2 image covering 100 km by 100 km. Two Sentinel-2 images, from 11 August 2017 and 5 September 2017, were used to measure the spectral behaviour of existing features before and after the storm. Forest use notifications, from which damaged stands were identified, and a forest-stand dataset, from which stands untouched by the storm (undamaged stands) were characterised, served as ground truth data. For change extraction, univariate image differencing was applied using six indices: EVI, NDVI, NDMI, SATVI, TCB and TCG. Two main approaches were taken in this thesis, pixelwise and average-based: in the former, individual pixels were extracted from stands and used to train the models, while in the latter the average of the pixels inside each stand was calculated and used for training. The average-based approach showed better performance in terms of user accuracy and stability of results than the pixelwise approach.
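Univariate image differencing computes an index for each date and subtracts the two images; pixels with a large drop indicate damage. A minimal numpy sketch using NDVI, one of the six indices listed (the reflectance values and the threshold are illustrative assumptions, not from the thesis):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

# Hypothetical pre- and post-storm reflectance for a tiny 2x2 tile
nir_pre,  red_pre  = np.array([[0.5, 0.6], [0.4, 0.5]]), np.full((2, 2), 0.1)
nir_post, red_post = np.array([[0.2, 0.6], [0.4, 0.1]]), np.full((2, 2), 0.1)

# Change image: pre minus post; a large positive value means vegetation loss
delta = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
damaged = delta > 0.2  # illustrative threshold
# damaged → [[True, False], [False, True]]
```

In the average-based approach that performed best, `delta` would be averaged over all pixels of a stand before thresholding or classification, rather than applied pixel by pixel as above.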

    Continental-scale land cover mapping at 10 m resolution over Europe (ELC10)

    Widely used European land cover maps such as CORINE are produced at medium spatial resolution (100 m) and rely on diverse data with complex workflows requiring significant institutional capacity. We present a high-resolution (10 m) land cover map (ELC10) of Europe based on a satellite-driven machine learning workflow that is annually updatable. A Random Forest classification model was trained on 70K ground-truth points from the LUCAS (Land Use/Cover Area frame Survey) dataset. Within the Google Earth Engine cloud computing environment, the ELC10 map can be generated from approx. 700 TB of Sentinel imagery within approx. 4 days from a single research user account. The map achieved an overall accuracy of 90% across 8 land cover classes and could account for statistical unit land cover proportions within 3.9% (R2 = 0.83) of the actual value. These accuracies are higher than those of CORINE (100 m) and other 10-m land cover maps, including S2GLC and FROM-GLC10. We found that atmospheric correction of Sentinel-2 and speckle filtering of Sentinel-1 imagery had minimal effect on classification accuracy (<1%). However, combining optical and radar imagery increased accuracy by 3% compared to Sentinel-2 alone and by 10% compared to Sentinel-1 alone. The conversion of LUCAS points into homogeneous polygons under the Copernicus module increased accuracy by <1%, revealing that Random Forests are robust against contaminated training data. Furthermore, the model requires very little training data to achieve moderate accuracies: the difference between 5K and 50K LUCAS points is only 3% (86% vs 89%). At 10-m resolution, the ELC10 map can distinguish detailed landscape features like hedgerows and gardens, and therefore holds potential for aerial statistics at the city borough level and for monitoring property-level environmental interventions (e.g. tree planting).
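The overall accuracy reported here is simply the fraction of validation points whose predicted class matches the reference label. A minimal numpy sketch with synthetic labels (not data from the study):

```python
import numpy as np

def overall_accuracy(reference, predicted):
    """Fraction of validation points whose predicted class matches reference."""
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    return (reference == predicted).mean()

# Synthetic reference and predicted class labels for 8 validation points
ref  = np.array([0, 0, 1, 1, 2, 2, 2, 3])
pred = np.array([0, 1, 1, 1, 2, 2, 0, 3])
overall_accuracy(ref, pred)  # → 0.75 (6 of 8 correct)
```

In practice the same confusion matrix also yields per-class producer's and user's accuracies, which reveal which of the 8 land cover classes drive the headline figure.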

    Advancements in Multi-temporal Remote Sensing Data Analysis Techniques for Precision Agriculture

    The abstract is in the attachment.