
    Image Simulation in Remote Sensing

    Remote sensing is actively applied in environmental, military, and urban-planning research through technologies such as monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade image quality or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an image that could not be acquired at the required time. The proposed methodologies offer an economical means of generating training imagery and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need for further in-depth research and development on image simulation at high spatial and spectral resolution, sensor fusion, and colorization remains. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    CubeSat constellations provide enhanced crop phenology and digital agricultural insights using daily leaf area index retrievals

    Satellite remote sensing has great potential to deliver on the promise of a data-driven agricultural revolution, with emerging space-based platforms providing spatiotemporal insights into precision-level attributes such as crop water use, vegetation health and condition, and crop response to management practices. Using a harmonized collection of high-resolution Planet CubeSat, Sentinel-2, and Landsat-8 imagery, together with coarser-resolution imagery from MODIS and VIIRS, we exploit a multi-satellite data fusion and machine learning approach to deliver a radiometrically calibrated and gap-filled time series of daily leaf area index (LAI) at an unprecedented spatial resolution of 3 m. The insights available from such high-resolution CubeSat-based LAI data are demonstrated by tracking the growth cycle of a maize crop and identifying observable within-field spatial and temporal variations across key phenological stages. Daily LAI retrievals peaked at the tasseling stage, demonstrating their value for fertilizer and irrigation scheduling. An evaluation of satellite-based retrievals against field-measured LAI data collected from both rain-fed and irrigated fields shows high correlation and captures the spatiotemporal development of intra- and inter-field variations. Novel agricultural insights related to individual vegetative and reproductive growth stages were obtained, showcasing the capacity of new high-resolution CubeSat platforms to deliver actionable intelligence for precision agricultural and related applications.
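
    The abstract gives no implementation details, but the calibrate-then-gap-fill structure it describes can be illustrated with a minimal Python sketch. The synthetic series, the simple least-squares calibration against a reference sensor, and the linear interpolation of missing days below are assumptions for illustration, not the authors' multi-sensor fusion and machine learning pipeline.

        import numpy as np
        import pandas as pd

        # Hypothetical inputs: a daily date axis, sparse CubeSat LAI retrievals (with a
        # sensor bias), and even sparser reference LAI from a harmonized Landsat-8/Sentinel-2 product.
        rng = np.random.default_rng(0)
        dates = pd.date_range("2021-05-01", "2021-09-30", freq="D")
        true_lai = pd.Series(np.sin(np.linspace(0, np.pi, len(dates))) * 5, index=dates)

        cubesat_lai = (0.8 * true_lai + 0.3).where(rng.random(len(dates)) < 0.4)   # biased, gappy
        reference_lai = true_lai.where(rng.random(len(dates)) < 0.1)               # sparse reference

        # 1) Radiometric calibration: fit a linear model on days observed by both sensors,
        #    then apply it to every CubeSat retrieval.
        both = cubesat_lai.notna() & reference_lai.notna()
        if both.sum() >= 2:                                   # need overlap to fit the calibration
            slope, intercept = np.polyfit(cubesat_lai[both], reference_lai[both], deg=1)
            calibrated = cubesat_lai * slope + intercept
        else:
            calibrated = cubesat_lai                          # not enough overlap; keep raw values

        # 2) Gap-filling: interpolate the calibrated series to a continuous daily LAI.
        daily_lai = calibrated.interpolate(method="time").clip(lower=0.0)

    The published work describes a multi-satellite fusion and machine learning approach rather than a single linear fit, but the same two stages of calibrating CubeSat retrievals against a harmonized reference and then filling the daily series apply.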

    Fusion of VNIR Optical and C-Band Polarimetric SAR Satellite Data for Accurate Detection of Temporal Changes in Vegetated Areas

    In this paper, we propose a processing chain that jointly employs Sentinel-1 and Sentinel-2 data to monitor changes in the status of the vegetation cover by integrating the four 10 m visible and near-infrared (VNIR) bands with the three red-edge (RE) bands of Sentinel-2. The latter approximately span the gap between the red and NIR bands (700 nm–800 nm), with bandwidths of 15/20 nm and 20 m pixel spacing. The RE bands are sharpened to 10 m following the hyper-sharpening protocol, which, unlike pansharpening, applies when the sharpening band is not unique. The resulting 10 m fusion product may be integrated with polarimetric features calculated from the Interferometric Wide (IW) Ground Range Detected (GRD) product of Sentinel-1, available at 10 m pixel spacing, before the fused data are analyzed for change detection. A key point of the proposed scheme is that the fusion of optical and synthetic aperture radar (SAR) data is accomplished at the level of change: the optical change feature, namely the difference in the normalized area over (reflectance) curve (NAOC) calculated from the sharpened RE bands, is modulated by the polarimetric SAR change feature, obtained as the temporal ratio of a polarimetric feature, which is itself the pixel-wise ratio between the co-polar and cross-polar channels. Hyper-sharpening of the Sentinel-2 RE bands, calculation of the NAOC, and modulation-based integration of the Sentinel-1 polarimetric change feature are applied to multitemporal datasets acquired before and after a fire event over Mount Serra, in Italy. The optical change feature captures variations in chlorophyll content. The polarimetric SAR temporal change feature describes depolarization effects and changes in the volumetric scattering of canopies. Their fusion shows an increased ability to highlight changes in vegetation status. In a performance comparison based on receiver operating characteristic (ROC) curves, the proposed change-feature-based fusion approach surpasses a traditional area-based approach and the normalized burn ratio (NBR) index, which is widely used for the detection of burnt vegetation.
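
    The change-level fusion described above can be made concrete with a short sketch. The trapezoidal NAOC approximation, the epsilon stabilizers, the band ordering, and the function names below are illustrative assumptions rather than the paper's exact formulation.

        import numpy as np

        def naoc(refl, wl_nm):
            """Schematic NAOC: one minus the trapezoid-integrated reflectance between the
            red and NIR wavelengths, normalized by the NIR reflectance times the spectral
            span. The published definition may differ in detail."""
            wl_nm = np.asarray(wl_nm, dtype=float)
            # refl: (bands, rows, cols) reflectance at wl_nm, ordered red, RE1, RE2, RE3, NIR
            dw = np.diff(wl_nm).reshape((-1,) + (1,) * (refl.ndim - 1))
            area_under = np.sum(0.5 * (refl[1:] + refl[:-1]) * dw, axis=0)
            span = wl_nm[-1] - wl_nm[0]
            return 1.0 - area_under / (refl[-1] * span + 1e-6)

        def fused_change_feature(refl_t1, refl_t2, vv_t1, vh_t1, vv_t2, vh_t2, wl_nm):
            """Optical change (NAOC difference) modulated by the temporal ratio of the
            SAR co-/cross-polar ratio, mirroring the change-level fusion described above."""
            delta_naoc = naoc(refl_t2, wl_nm) - naoc(refl_t1, wl_nm)
            pol_t1 = vv_t1 / (vh_t1 + 1e-6)          # co-polar / cross-polar at time 1
            pol_t2 = vv_t2 / (vh_t2 + 1e-6)          # co-polar / cross-polar at time 2
            sar_change = pol_t2 / (pol_t1 + 1e-6)    # temporal ratio of the polarimetric feature
            return delta_naoc * sar_change

    The fused feature would then be thresholded, with the ROC analysis mentioned in the abstract quantifying how well it separates burnt from unburnt vegetation.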

    Predicting forest cover in distinct ecosystems: the potential of multi-source Sentinel-1 and -2 data fusion

    The fusion of microwave and optical data sets is expected to provide great potential for the derivation of forest cover around the globe. As Sentinel-1 and Sentinel-2 are now both operating in twin mode, they provide an unprecedented data source for building dense spatial and temporal high-resolution time series across a variety of wavelengths. This study investigates (i) the ability of the individual sensors and (ii) their joint potential to delineate forest cover for study sites in two highly varied landscapes located in Germany (temperate dense mixed forests) and South Africa (open savanna woody vegetation and forest plantations). We used multi-temporal Sentinel-1 and single time steps of Sentinel-2 data in combination to derive accurate forest/non-forest (FNF) information via machine-learning classifiers. The forest classification accuracies for the fused data set were 90.9% and 93.2% for South Africa and Thuringia, respectively, estimated using autocorrelation-corrected spatial cross-validation (CV). Sentinel-1-only classifications provided the lowest overall accuracy of 87.5%, while Sentinel-2-based classifications led to higher accuracies of 91.9%. Sentinel-2 short-wave infrared (SWIR) channels, biophysical parameters (leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FAPAR)), and the lower spectrum of the Sentinel-1 synthetic aperture radar (SAR) time series were found to be most distinctive in the detection of forest cover. In contrast to homogeneous forest sites, Sentinel-1 time-series information improved forest cover predictions in open savanna-like environments with heterogeneous regional features. The presented approach proved to be robust and displayed the benefit of fusing optical and SAR data at high spatial resolution.
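
    As a rough illustration of this kind of classification setup (not the authors' exact workflow), forest/non-forest mapping with a random forest and block-wise cross-validation might be sketched as follows; the feature layout, block IDs, and synthetic data are assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GroupKFold, cross_val_score

        # Hypothetical feature table: one row per sample with stacked Sentinel-1
        # backscatter statistics and Sentinel-2 bands / biophysical variables.
        rng = np.random.default_rng(0)
        n = 1000
        X = rng.normal(size=(n, 12))          # e.g. VV/VH temporal stats + S2 SWIR, LAI, FAPAR
        y = rng.integers(0, 2, size=n)        # forest / non-forest labels
        blocks = rng.integers(0, 10, size=n)  # spatial block IDs to limit autocorrelation leakage

        # Grouped (block-wise) CV approximates the spatial cross-validation idea:
        # samples from the same spatial block never appear in both train and test folds.
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=blocks)
        print(f"Blocked CV overall accuracy: {scores.mean():.3f}")

    The study's spatial CV additionally corrects for autocorrelation; grouped blocks are only a simple stand-in for that procedure.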

    Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture

    Big streams of Earth images from satellites and other platforms (e.g., drones and mobile phones) are becoming increasingly available at low or no cost and with enhanced spatial and temporal resolution. This thesis recognizes the unprecedented opportunities offered by the high-quality, open-access Earth observation data of our times and introduces novel machine learning and big data methods to exploit them for applications in sustainable and resilient agriculture. The thesis addresses three distinct thematic areas: the monitoring of the Common Agricultural Policy (CAP), the monitoring of food security, and applications for smart and resilient agriculture. The methodological innovations across these areas address the following issues: i) the processing of big Earth observation (EO) data, ii) the scarcity of annotated data for machine learning model training, and iii) the gap between machine learning outputs and actionable advice. The thesis demonstrates how big data technologies such as data cubes, distributed learning, linked open data, and semantic enrichment can be used to exploit the data deluge and extract knowledge that addresses real user needs. Furthermore, it argues for the importance of semi-supervised and unsupervised machine learning models that circumvent the ever-present challenge of scarce annotations and thus allow models to generalize in space and time. Specifically, it is shown how only a few ground-truth data are needed to generate high-quality crop type maps and crop phenology estimates. Finally, the thesis argues that there is a considerable gap between model inferences and decision making in real-world scenarios, and showcases the power of causal and interpretable machine learning in bridging this gap.
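
    One way to picture the scarce-annotation argument is a self-training classifier that propagates a handful of labels to unlabeled pixels; the synthetic data and the choice of scikit-learn's SelfTrainingClassifier below are illustrative assumptions, not the methods developed in the thesis.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.semi_supervised import SelfTrainingClassifier

        # Hypothetical crop-type setup: many pixels with Sentinel-2 time-series features,
        # but labels (crop type) for only a small fraction of them.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(5000, 20))            # per-pixel spectral/temporal features
        y = rng.integers(0, 4, size=5000)          # 4 crop classes (synthetic)
        y_sparse = y.copy()
        unlabeled = rng.random(5000) > 0.02        # keep only ~2% of the labels
        y_sparse[unlabeled] = -1                   # -1 marks unlabeled samples

        # Self-training: the base classifier iteratively labels its most confident
        # unlabeled pixels and retrains, stretching a handful of ground-truth points.
        model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100, random_state=1))
        model.fit(X, y_sparse)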

    Unmixing-based Spatiotemporal Image Fusion Based on the Self-trained Random Forest Regression and Residual Compensation

    Spatiotemporal satellite image fusion (STIF) has been widely applied in land surface monitoring to generate high spatial and high temporal resolution reflectance images from satellite sensors. This paper proposes a new unmixing-based spatiotemporal fusion method composed of a self-trained random forest machine learning regression (R), low-resolution (LR) endmember estimation (E), high-resolution (HR) surface reflectance image reconstruction (R), and residual compensation (C), that is, RERC. RERC uses a self-trained random forest to train and predict the relationship between spectra and the corresponding class fractions. This process is flexible, requires no ancillary training dataset, and does not suffer from the limitation of linear spectral unmixing, which requires the number of endmembers to be no more than the number of spectral bands. The running time of the random forest regression is about 1% of that of the linear mixture model. In addition, RERC adopts a spectral reflectance residual compensation approach to refine the fused image and make full use of the information in the LR image. RERC was assessed in fusing a prediction-time MODIS image with a Landsat image using two benchmark datasets, and in fusing images with different numbers of spectral bands by fusing a known-time Landsat image (seven bands used) with a known-time very-high-resolution PlanetScope image (four spectral bands). RERC was also assessed in the fusion of MODIS-Landsat imagery over large areas at the national scale for the Republic of Ireland and France. The code is available at https://www.researchgate.net/profile/Xiao_Li52.
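
    A compressed sketch of the RERC idea (regression from spectra to class fractions, endmember estimation, residual compensation) is given below; the array shapes, synthetic data, and least-squares endmember step are assumptions for illustration, and the fine-resolution reconstruction step of the published method is only indicated in comments.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical setup: n coarse (LR) pixels with b spectral bands and k land-cover
        # classes. Class fractions per coarse pixel come from classifying the fine (HR)
        # image available at the known date, so no external training data are needed.
        n, b, k = 2000, 6, 4
        rng = np.random.default_rng(2)
        lr_spectra_known = rng.random((n, b))
        class_fractions_known = rng.dirichlet(np.ones(k), size=n)

        # Self-trained regression: learn the mapping from LR spectra to class fractions
        # at the known date, then apply it at the prediction date. Unlike linear
        # unmixing, this does not require the number of classes to be <= the band count.
        rf = RandomForestRegressor(n_estimators=100, random_state=2)
        rf.fit(lr_spectra_known, class_fractions_known)

        lr_spectra_pred = rng.random((n, b))
        class_fractions_pred = rf.predict(lr_spectra_pred)    # (n, k) estimated fractions

        # Per-class endmember reflectance at the prediction date can then be solved from
        # the fractions and LR spectra (least squares) and assigned to HR pixels by class.
        endmembers_pred, *_ = np.linalg.lstsq(class_fractions_pred, lr_spectra_pred, rcond=None)

        # Residual compensation: the difference between the observed LR spectra and the
        # fraction-weighted reconstruction would be redistributed to the HR pixels inside
        # each coarse pixel, keeping the fused HR image consistent with the LR observation.
        lr_residual = lr_spectra_pred - class_fractions_pred @ endmembers_pred

    In the full method, the estimated fractions and endmembers drive an HR image reconstruction, and the spectral reflectance residual is then compensated at fine resolution as described in the abstract.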

    Reviewing the potential of Sentinel-2 in assessing the drought

    This paper systematically reviews the potential of Sentinel-2 (A and B) in assessing drought. Research findings, including the IPCC reports, have highlighted the increasing trend in drought over recent decades and the need for a better understanding and assessment of this phenomenon. Continuous monitoring of the Earth's surface is an efficient method for predicting and identifying early warnings of drought, which enables mitigation procedures to be prepared and planned. Considering their spatial, temporal, and spectral characteristics, the freely available Sentinel-2 data products are a promising option in this area of research compared to Landsat and MODIS. This paper evaluates the recent developments in this field induced by the launch of Sentinel-2, as well as comparisons with other existing data products. The objective is to evaluate the potential of Sentinel-2 in assessing drought through vegetation characteristics, soil moisture, evapotranspiration, surface water including wetlands, and land use and land cover analysis. Furthermore, this review addresses and compares various data fusion and downscaling methods applied to Sentinel-2 for retrieving the major bio-geophysical variables used in the analysis of drought. Additionally, the limitations of Sentinel-2 in its direct applicability to drought studies are evaluated.
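
    For readers unfamiliar with the indicators discussed, a generic sketch of two Sentinel-2-derived drought proxies and a per-pixel anomaly follows; it illustrates the kind of variables the review covers and is not a method proposed in it. (Band B11 is delivered at 20 m and would need resampling to match the 10 m bands.)

        import numpy as np

        def drought_indicators(b04_red, b08_nir, b11_swir):
            """Two common Sentinel-2-derived proxies used in drought assessment:
            NDVI (vegetation vigour) and NDMI (canopy/soil moisture)."""
            eps = 1e-6
            ndvi = (b08_nir - b04_red) / (b08_nir + b04_red + eps)
            ndmi = (b08_nir - b11_swir) / (b08_nir + b11_swir + eps)
            return ndvi, ndmi

        # Drought is usually judged from anomalies against a multi-year baseline rather
        # than from a single date; a z-score against the per-pixel climatology is typical.
        def anomaly(index_stack, current):
            mean = np.nanmean(index_stack, axis=0)
            std = np.nanstd(index_stack, axis=0) + 1e-6
            return (current - mean) / std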

    Automated and robust geometric and spectral fusion of multi-sensor, multi-spectral satellite images

    Earth observation satellite data acquired in recent years and decades provide an ideal data basis for accurate long-term monitoring and mapping of the Earth's surface and atmosphere. However, the vast diversity of sensor characteristics often prevents synergetic use. Hence, there is an urgent need to combine heterogeneous multi-sensor data to generate geometrically and spectrally harmonized time series of analysis-ready satellite data. This dissertation provides a mainly methodical contribution by presenting two newly developed, open-source algorithms for sensor fusion, which are both thoroughly evaluated as well as tested and validated in practical applications. AROSICS, a novel algorithm for multi-sensor image co-registration and geometric harmonization, provides robust and automated detection and correction of positional shifts and aligns the data to a common coordinate grid. The second algorithm, SpecHomo, was developed to unify differing spectral sensor characteristics. It relies on separate material-specific regressors for different land cover classes, enabling higher transformation accuracies and the estimation of unilaterally missing spectral bands. Based on these algorithms, a third study investigated the added value of synthesized red edge bands and the use of dense time series, enabled by sensor fusion, for the estimation of burn severity and mapping of fire damage from Landsat.
The results illustrate the effectiveness of the developed algorithms in reducing multi-sensor, multi-temporal data inconsistencies and demonstrate the added value of geometric and spectral harmonization for subsequent products. Synthesized red edge information proved valuable for retrieving vegetation-related parameters such as burn severity. Moreover, using sensor fusion to combine multi-sensor time series was shown to offer great potential for more accurate monitoring and mapping of quickly evolving environmental processes.
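
    As a rough illustration of the material-specific regression concept behind SpecHomo (not its actual implementation, which defines its own classification scheme and regressor types), a per-class linear mapping between sensor band spaces might look like the sketch below; all variable names and the synthetic data are assumptions, and the co-registration performed by AROSICS is presumed to have already aligned the samples.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Schematic version of the material-specific regression idea: one regressor per
        # land-cover class maps the source sensor's bands onto the target sensor's band
        # set, which may include bands the source sensor does not have.
        rng = np.random.default_rng(3)
        n_classes, b_src, b_tgt = 5, 4, 6          # e.g. 4 source bands -> 6 target bands
        src = rng.random((10000, b_src))           # source-sensor reflectance samples
        tgt = rng.random((10000, b_tgt))           # co-registered target-sensor reflectance
        labels = rng.integers(0, n_classes, 10000) # land-cover / material class per sample

        regressors = {}
        for c in range(n_classes):
            m = labels == c
            regressors[c] = LinearRegression().fit(src[m], tgt[m])

        # Prediction: each pixel is transformed with the regressor of its class, which
        # also yields an estimate of spectrally "missing" target bands (e.g. red edge).
        def harmonize(pixel_spectra, pixel_classes):
            out = np.empty((len(pixel_spectra), b_tgt))
            for c, reg in regressors.items():
                m = pixel_classes == c
                if m.any():
                    out[m] = reg.predict(pixel_spectra[m])
            return out

    The per-class structure is what allows the estimation of unilaterally missing bands mentioned above: the target band space can be larger than the source one.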