
    Enhancing Surface Soil Moisture Estimation through Integration of Artificial Neural Networks Machine Learning and Fusion of Meteorological, Sentinel-1A and Sentinel-2A Satellite Data

    For many environmental and agricultural applications, an accurate estimation of surface soil moisture is essential. This study sought to determine whether combining Sentinel-1A, Sentinel-2A, and meteorological data with artificial neural networks (ANN) could improve soil moisture estimation across various land cover types. To train and evaluate the model, we used field data (provided by La Tuscia University) collected in the study area between October and December 2022, with surface soil moisture measured at 29 locations. A feed-forward ANN model was trained, validated, and tested on the input features using a 60:10:30 split. The ANN model predicted soil moisture with high precision, achieving a coefficient of determination (R²) of 0.71 and a correlation coefficient (R) of 0.84. Furthermore, incorporating Random Forest (RF) algorithms for soil moisture prediction improved R² to 0.89. The unique combination of active microwave, meteorological, and multispectral data provides an opportunity to exploit the complementary nature of the datasets. Through preprocessing, fusion, and ANN modeling, this research advances soil moisture estimation techniques and provides valuable insights for water resource management and agricultural planning in the study area.
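The training setup described above (a feed-forward ANN with a 60:10:30 train/validation/test split) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's model or measurements; scikit-learn's MLPRegressor stands in for the feed-forward ANN, and the six input features are hypothetical placeholders for the fused Sentinel-1A, Sentinel-2A, and meteorological inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the fused inputs (SAR backscatter, optical
# reflectances, meteorological variables); the real study used field
# measurements from 29 locations.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=300)

# 60:10:30 train/validation/test split, as in the abstract:
# 60% train, then split the remaining 40% into 10% val and 30% test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.6, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=0.25, random_state=0)

# Feed-forward ANN (multi-layer perceptron) regressor.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
r2_test = model.score(X_test, y_test)  # coefficient of determination R²
```

The validation split would normally drive early stopping or hyperparameter choice; here it is shown only to reproduce the 60:10:30 proportions.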

    A Deep Learning Framework in Selected Remote Sensing Applications

    The main research topic is designing and implementing a deep learning framework applied to remote sensing. Remote sensing techniques and applications play a crucial role in observing the Earth's evolution, especially nowadays, when the effects of climate change on our lives are increasingly evident. A considerable amount of data is acquired daily all over the Earth, and effective exploitation of this information requires the robustness, speed, and accuracy of deep learning. This emerging need inspired the choice of this topic. The conducted studies mainly focus on two European Space Agency (ESA) missions: Sentinel-1 and Sentinel-2. Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral, and temporal resolution, as well as their open access policy. The increasing interest these satellites have gained in research and applicative scenarios pushed us to utilize them in the considered framework. The combined use of Sentinel-1 and Sentinel-2 is crucial in many kinds of monitoring, particularly when the growing (or changing) dynamics are very rapid. Starting from this general framework, two specific research activities were identified and investigated, leading to the results presented in this dissertation. Both studies can be placed in the context of data fusion. The first activity deals with a super-resolution framework to improve the Sentinel-2 bands supplied at 20 m up to 10 m. Increasing the spatial resolution of these bands is of great interest in many remote sensing applications, particularly in monitoring vegetation, rivers, and forests. The second activity applies the deep learning framework to multispectral Normalized Difference Vegetation Index (NDVI) extraction and to semantic segmentation obtained by fusing Sentinel-1 and Sentinel-2 data.
Sentinel-1 SAR data is of great importance for the quantity of information it provides in monitoring wetlands, rivers, forests, and many other contexts. In both cases, the problem was addressed with deep learning techniques using very lean architectures, demonstrating that high-level results can be obtained even without large computing resources. The core of this framework is a Convolutional Neural Network (CNN). CNNs have been successfully applied to many image processing problems, such as super-resolution, pansharpening, and classification, because of several advantages: (i) the capability to approximate complex non-linear functions, (ii) ease of training, which avoids time-consuming handcrafted filter design, and (iii) a parallel computational architecture. Even though a large amount of labelled data is required for training, CNN performance motivated this architectural choice. In the Sentinel-1 and Sentinel-2 integration task, the problem of manually labelled data was overcome with an approach based on integrating these two different sensors. Therefore, apart from the investigation of Sentinel-1 and Sentinel-2 integration, the main contribution of both works is the design of CNN-based solutions distinguished by their computational lightness, with a consequent substantial saving of time compared to more complex state-of-the-art deep learning solutions.
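The NDVI mentioned above is a standard ratio of near-infrared and red reflectance. A minimal sketch of its computation from two Sentinel-2 bands (red, band 4; NIR, band 8), using illustrative values rather than real imagery:

```python
import numpy as np

# Illustrative 2x2 reflectance tiles; real Sentinel-2 scenes would be
# large rasters with the same per-pixel arithmetic.
red = np.array([[0.10, 0.20],
                [0.05, 0.30]])
nir = np.array([[0.50, 0.40],
                [0.45, 0.30]])

# NDVI = (NIR - red) / (NIR + red); a small epsilon guards against
# division by zero over dark pixels. Values lie in [-1, 1].
ndvi = (nir - red) / (nir + red + 1e-12)
```

Dense vegetation pushes NDVI toward 1, bare soil toward 0, and water below 0, which is why the index is a common target for the fusion and segmentation tasks described above.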

    A CNN-based fusion method for feature extraction from sentinel data

    Sensitivity to weather conditions, and especially to clouds, is a severe limiting factor in the use of optical remote sensing for Earth monitoring applications. A possible alternative is to rely on weather-insensitive synthetic aperture radar (SAR) images. In many real-world applications, critical decisions are made based on informative optical or radar features related to items such as water, vegetation, or soil. Under cloudy conditions, however, optical-based features are not available, and they are commonly reconstructed through linear interpolation between data available at temporally close time instants. In this work, we propose to estimate missing optical features through data fusion and deep learning. Several sources of information are taken into account (optical sequences, SAR sequences, and a digital elevation model) so as to exploit both temporal and cross-sensor dependencies. Based on these data and a tiny cloud-free fraction of the target image, a compact convolutional neural network (CNN) is trained to perform the desired estimation. To validate the proposed approach, we focus on the estimation of the normalized difference vegetation index (NDVI), using coupled Sentinel-1 and Sentinel-2 time series acquired over an agricultural region of Burkina Faso from May to November 2016. Several fusion schemes are considered, causal and non-causal, single-sensor or joint-sensor, corresponding to different operating conditions. Experimental results are very promising, showing a significant gain over baseline methods according to all performance indicators.
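The linear-interpolation baseline that the abstract compares against (reconstructing a cloud-masked observation from temporally adjacent clear ones) can be sketched in a few lines. Dates and NDVI values below are illustrative, not from the Burkina Faso time series:

```python
import numpy as np

# NDVI time series with one cloud-masked sample (NaN at day 20).
days = np.array([0, 10, 20, 30, 40], dtype=float)
ndvi = np.array([0.2, 0.3, np.nan, 0.6, 0.5])

# Reconstruct missing samples by linear interpolation between the
# temporally closest clear observations.
clear = ~np.isnan(ndvi)
ndvi_filled = ndvi.copy()
ndvi_filled[~clear] = np.interp(days[~clear], days[clear], ndvi[clear])
```

The CNN-based fusion approach in the abstract aims to beat exactly this kind of purely temporal reconstruction by also exploiting the co-registered SAR sequence.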

    Remote Sensing and Data Fusion for Eucalyptus Trees Identification

    Satellite remote sensing relies on the extraction of data/information from satellite or aircraft images, through multispectral imagery, which allows their remote analysis and classification. Analyzing those images with data fusion tools and techniques seems a suitable approach for the identification and classification of land cover. This land cover classification is possible because fusion/merging techniques can aggregate various sources of heterogeneous information to generate value-added products that facilitate feature classification and analysis. This work proposes to apply a data fusion algorithm, denoted FIF (Fuzzy Information Fusion), which combines computational intelligence techniques with multicriteria concepts and techniques to automatically distinguish Eucalyptus trees in satellite images. To assess the proposed approach, a Portuguese region that includes planted Eucalyptus is used. This region was chosen because it includes a significant number of eucalyptus trees and, currently, it is hard to automatically distinguish them from other types of trees in satellite images, which makes this study an interesting experiment in using data fusion techniques to differentiate tree types. Further, the proposed approach is tested and validated with several fusion/aggregation operators to verify its versatility. Overall, the results of the study demonstrate the potential of this approach for automatic classification of land cover types (e.g. water, trees, roads, etc.).
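The abstract does not specify which fusion/aggregation operators FIF uses, but the idea of aggregating per-criterion membership scores into a single class score can be illustrated with three common operators. The scores below are hypothetical "degree of eucalyptus-ness" values per pixel and per criterion:

```python
import numpy as np

# Each row is one pixel, each column one criterion's fuzzy membership
# score (e.g. from a spectral index or texture measure); illustrative.
scores = np.array([[0.9, 0.8, 0.7],
                   [0.2, 0.9, 0.1]])

# Conjunctive aggregation (fuzzy AND): every criterion must agree.
conjunctive = scores.min(axis=1)

# Disjunctive aggregation (fuzzy OR): any criterion suffices.
disjunctive = scores.max(axis=1)

# Ordered weighted averaging (OWA): sort scores descending per pixel,
# then take a weighted sum, trading off between the two extremes.
weights = np.array([0.5, 0.3, 0.2])
owa = (np.sort(scores, axis=1)[:, ::-1] * weights).sum(axis=1)
```

Swapping the operator changes how tolerant the classifier is of disagreeing criteria, which is presumably what testing "several fusion/aggregation operators" in the abstract explores.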

    A systematic review of the use of Deep Learning in Satellite Imagery for Agriculture

    Agricultural research is essential for increasing food production to meet the requirements of a growing population in the coming decades. Satellite technology has been improving rapidly, and deep learning has seen much success in generic computer vision tasks and many application areas, which presents an important opportunity to improve the analysis of agricultural land. Here we present a systematic review of 150 studies to find the current uses of deep learning on satellite imagery for agricultural research. Although we identify five categories of agricultural monitoring tasks, the majority of the research interest is in crop segmentation and yield prediction. We found that, when used, modern deep learning methods consistently outperformed traditional machine learning across most tasks; the only exception was that Long Short-Term Memory (LSTM) Recurrent Neural Networks did not consistently outperform Random Forests (RF) for yield prediction. The reviewed studies have largely adopted methodologies from generic computer vision, except for one major omission: benchmark datasets are not utilised to evaluate models across studies, making it difficult to compare results. Additionally, some studies have specifically utilised the extra spectral resolution available in satellite imagery, but other divergent properties of satellite images, such as the hugely different scales of spatial patterns, are not being taken advantage of in the reviewed studies.

    Satellite-based estimation of soil organic carbon in Portuguese grasslands

    Soil organic carbon (SOC) sequestration is one of the main ecosystem services provided by well-managed grasslands. In the Mediterranean region, sown biodiverse pastures (SBP) rich in legumes are a nature-based, innovative, and economically competitive livestock production system. As a co-benefit of increased yield, they also contribute to carbon sequestration through SOC accumulation. However, SOC monitoring in SBP requires time-consuming and costly field work. Methods: In this study, we propose an expedited and cost-effective indirect method to estimate SOC content, developing models that combine remote sensing (RS) and machine learning (ML) approaches. We used field-measured data collected from nine different farms during four production years (between 2017 and 2021). We utilized RS data from both Sentinel-1 and Sentinel-2, including reflectance bands and vegetation indices, along with other covariates such as climatic, soil, and terrain variables, for a total of 49 inputs. To reduce multicollinearity problems between the different variables, we performed feature selection using the sequential feature selection approach. We then estimated SOC content using both the complete dataset and the selected features. Multiple ML methods were tested and compared, including multiple linear regression (MLR), random forests (RF), extreme gradient boosting (XGB), and artificial neural networks (ANN). We used a random cross-validation approach (with 10 folds), and a Bayesian optimization approach to find the hyperparameters that led to the best performance. Results: The XGB method led to higher estimation accuracy than the other methods, and the estimation performance was not significantly influenced by the feature selection approach.
For XGB, the average root mean square error (RMSE), measured on the test set among all folds, was 2.78 g kg⁻¹ (r² = 0.68) without feature selection, and 2.77 g kg⁻¹ (r² = 0.68) with feature selection (average SOC content is 13 g kg⁻¹). The models were applied to obtain SOC content maps for all farms. Discussion: This work demonstrated that combining RS and ML can help obtain quick estimations of SOC content to assist with SBP management.
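The pipeline described above (sequential feature selection to tame multicollinearity, then a gradient-boosted regressor evaluated with 10-fold cross-validation) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's 49 covariates; scikit-learn's GradientBoostingRegressor stands in for XGBoost, and the Bayesian hyperparameter search is omitted:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the RS/climate/soil/terrain covariate table.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

est = GradientBoostingRegressor(n_estimators=25, random_state=0)

# Forward sequential feature selection, as in the study's approach.
selector = SequentialFeatureSelector(est, n_features_to_select=5,
                                     direction="forward", cv=3)
selector.fit(X, y)
X_sel = selector.transform(X)

# 10-fold cross-validated RMSE on the selected features.
neg_rmse = cross_val_score(est, X_sel, y, cv=10,
                           scoring="neg_root_mean_squared_error")
rmse = -neg_rmse.mean()
```

Comparing `rmse` with and without the selection step mirrors the study's finding that feature selection barely changed accuracy while shrinking the input set.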

    Remote Sensing in Agriculture: State-of-the-Art

    The Special Issue on “Remote Sensing in Agriculture: State-of-the-Art” gives an exhaustive overview of the ongoing transfer of remote sensing technology into the agricultural sector. It consists of 10 high-quality papers focusing on a wide range of remote sensing models and techniques to forecast crop production and yield, to map agricultural landscapes, and to evaluate plant and soil biophysical features. Satellite, RPAS, and SAR data were involved. This preface briefly describes each contribution published in this Special Issue.

    Deep Learning based data-fusion methods for remote sensing applications

    In recent years, an increasing number of remote sensing sensors have been launched into orbit around the Earth, producing a continuously growing volume of data that is useful for a large number of monitoring applications. Although modern optical sensors provide rich spectral information about the Earth's surface at very high resolution, they are weather-sensitive. SAR images, on the other hand, are available even in the presence of clouds, are almost weather-insensitive, and can be acquired day and night; however, they do not provide rich spectral information and are severely affected by speckle "noise", which makes information extraction difficult. For these reasons, it is worthwhile and challenging to fuse data provided by different sources and/or acquired at different times, in order to leverage their diversity and complementarity to retrieve the target information. Motivated by the success of Deep Learning methods in many image processing tasks, this thesis addresses different typical remote sensing data-fusion problems by means of suitably designed Convolutional Neural Networks.
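One simple way to fuse the complementary sources described above is "early fusion": stacking co-registered SAR and optical channels and feeding the stack to a convolutional layer. The sketch below is a hypothetical illustration in plain numpy (shapes, weights, and the single hand-rolled convolution are assumptions, not the thesis architecture):

```python
import numpy as np

# Co-registered input tiles: 1 SAR backscatter channel and 4 optical
# bands over the same 8x8 patch; values are random placeholders.
H, W = 8, 8
sar = np.random.default_rng(0).normal(size=(1, H, W))
optical = np.random.default_rng(1).normal(size=(4, H, W))

# Early fusion: concatenate along the channel axis -> (5, H, W).
fused = np.concatenate([sar, optical], axis=0)

def conv2d(x, kernels):
    """Valid-mode multi-channel 2D convolution; kernels: (F, C, kH, kW)."""
    F, C, kH, kW = kernels.shape
    _, H, W = x.shape
    out = np.zeros((F, H - kH + 1, W - kW + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * kernels[f])
    return out

# One bank of 8 random 3x3 filters over all 5 fused channels, with a
# ReLU non-linearity: the first layer of a tiny fusion CNN.
kernels = np.random.default_rng(2).normal(size=(8, 5, 3, 3))
features = np.maximum(conv2d(fused, kernels), 0.0)
```

A trained network would learn the kernel weights from data; the point here is only how a single convolutional layer sees both sensors jointly once the channels are stacked.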