
    Virtual image pair-based spatio-temporal fusion

    Spatio-temporal fusion is a technique used to produce images with both fine spatial and fine temporal resolution. Generally, the principle of existing spatio-temporal fusion methods can be characterized by a unified prediction framework with two parts: (i) the known fine spatial resolution images (e.g., Landsat images), and (ii) the fine spatial resolution increment predicted from the available coarse spatial resolution increment (i.e., a downscaling process), that is, the difference between the coarse spatial resolution images (e.g., MODIS images) acquired at the known and prediction times. Owing to seasonal and land cover changes, there are always large differences between images acquired at different times, resulting in a large increment and, in turn, great uncertainty in downscaling. In this paper, a virtual image pair-based spatio-temporal fusion (VIPSTF) approach is proposed to deal with this problem. VIPSTF is based on the concept of a virtual image pair (VIP), which is produced from the available, known MODIS-Landsat image pairs. We demonstrate theoretically that, compared to the known image pairs, the VIP is closer to the data at the prediction time. The VIP can capture more fine spatial resolution information directly from known images and reduce the challenge in downscaling. VIPSTF is a flexible framework suitable for existing spatial weighting- and spatial unmixing-based methods, and two versions, VIPSTF-SW and VIPSTF-SU, are thus developed. Experimental results on a heterogeneous site and a site experiencing land cover type changes show that both spatial weighting- and spatial unmixing-based methods can be enhanced by VIPSTF, and the advantage is particularly noticeable when the observed image pairs are temporally far from the prediction time. Moreover, VIPSTF is free of the need for image pair selection, robust to the use of multiple image pairs, and computationally faster than the original methods when multiple image pairs are used. The concept of the VIP provides new insight into enhancing spatio-temporal fusion by making fuller use of the observed image pairs and reducing the uncertainty of estimating the fine spatial resolution increment. © 2020 Elsevier Inc.
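The two-part prediction framework described in this abstract (known fine image plus a downscaled coarse increment) can be sketched in a few lines. This is a minimal illustration under stated assumptions: the function name `fuse` is hypothetical, and the nearest-neighbour upsampling via `np.kron` merely stands in for the spatial weighting or spectral unmixing downscaling that actual methods use.

```python
import numpy as np

def fuse(fine_known, coarse_known, coarse_pred, scale):
    """Generic increment-based spatio-temporal fusion sketch.

    fine_known:   fine-resolution image at the known time (e.g. Landsat)
    coarse_known: coarse-resolution image at the known time (e.g. MODIS)
    coarse_pred:  coarse-resolution image at the prediction time
    scale:        ratio of fine to coarse pixel size
    """
    # Coarse increment between prediction and known times; this is the
    # quantity that grows (and becomes uncertain) when the known pair is
    # temporally far from the prediction time.
    increment = coarse_pred - coarse_known
    # Naive downscaling: replicate each coarse pixel scale x scale times.
    fine_increment = np.kron(increment, np.ones((scale, scale)))
    # Prediction = known fine image + downscaled increment.
    return fine_known + fine_increment
```

The VIP idea targets exactly the `increment` term: by synthesizing a virtual pair closer to the prediction time, the increment to be downscaled becomes smaller and less uncertain.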

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, helping to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have expanded along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.

    Cloud removal from optical remote sensing images

    Optical remote sensing images used for Earth surface observation are constantly contaminated by cloud cover. Clouds dynamically affect the applications of optical data and increase the difficulty of image analysis. Therefore, cloud is considered one of the sources of noise in optical image data, and its detection and removal need to be performed as a pre-processing step in most remote sensing image processing applications. This thesis investigates current cloud detection and removal algorithms and develops three new cloud removal methods to improve the accuracy of the results. A thin cloud removal method based on signal transmission principles and spectral mixture analysis (ST-SMA) for pixel correction is developed in the first contribution. This method considers not only the additive reflectance from the clouds but also the energy absorbed when solar radiation passes through them. Data correction is achieved by subtracting the product of the cloud endmember signature and the cloud abundance and rescaling according to the cloud thickness. The proposed method requires no meteorological data and does not rely on reference images. The experimental results indicate that the proposed approach is able to remove thin clouds effectively in different scenarios. In the second study, an effective cloud removal method is proposed that takes advantage of the noise-adjusted principal components transform (CR-NAPCT). It is found that, when spatial correlation is considered, the signal-to-noise ratio (S/N) of cloudy data is higher than that of data without cloud contamination, and this contrast appears in the first NAPCT component (NAPC1). An inverse transformation with a modified first component is then applied to generate the cloud-free image. The effectiveness of the proposed method is assessed through experiments on simulated and real data comparing the quantitative and qualitative performance of the proposed approach.
The third study of this thesis deals with both cloud and cloud shadow problems with the aid of an auxiliary image acquired under clear-sky conditions. A new cloud removal approach called multitemporal dictionary learning (MDL) is proposed. Dictionaries of the cloudy areas (target data) and the cloud-free areas (reference data) are learned separately in the spectral domain, using an online dictionary learning method. The removal process is conducted by combining the coefficients from the reference image with the dictionary learned from the target image. This method is able to recover data contaminated by thin and thick clouds or cloud shadows. The experimental results show that the MDL method is effective from both quantitative and qualitative viewpoints.
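The ST-SMA pixel correction in the first contribution above (subtract the cloud endmember contribution, then rescale by the transmitted fraction) can be sketched as a one-line inversion of a simple thin-cloud mixing model. This is an illustrative simplification, not the thesis's full algorithm: the linear mixing form and the use of abundance as the thickness proxy are assumptions made here for the sketch.

```python
import numpy as np

def st_sma_correct(pixel, cloud_signature, cloud_abundance):
    """Recover surface reflectance from a thin-cloud-contaminated pixel.

    Assumes the observed pixel mixes as
        pixel = (1 - abundance) * surface + abundance * cloud_signature,
    i.e. an additive cloud term plus attenuation of the surface signal.
    """
    # Subtract the product of cloud endmember signature and abundance.
    residual = pixel - cloud_abundance * cloud_signature
    # Rescale: only a (1 - abundance) fraction of the surface radiance
    # is transmitted through the thin cloud.
    return residual / (1.0 - cloud_abundance)
```

Inverting the forward model exactly is what makes the correction reference-free: only the per-pixel cloud abundance and the cloud endmember spectrum are needed.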

    Spatiotemporal subpixel mapping of time-series images

    Land cover/land use (LCLU) information extraction from multitemporal sequences of remote sensing imagery is becoming increasingly important. Mixed pixels are a common problem in the Landsat and MODIS images that are used widely for LCLU monitoring. Recently developed subpixel mapping (SPM) techniques can extract LCLU information at the subpixel level by dividing mixed pixels into subpixels, to which hard classes are then allocated. However, SPM has rarely been studied for time-series images (TSIs). In this paper, a spatiotemporal approach is proposed for SPM of TSIs. In contrast to conventional spatial dependence-based SPM methods, the proposed approach simultaneously considers spatial and temporal dependence: the former captures the correlation of subpixel classes within each image, and the latter the correlation of subpixel classes between images in a temporal sequence. The proposed approach assumes that one fine spatial resolution map is available among the TSIs. The SPM of TSIs is formulated as a constrained optimization problem: under the coherence constraint imposed by the coarse LCLU proportions, the objective is to maximize the spatiotemporal dependence, defined by blending the spatial and temporal dependences. Experiments on three data sets showed that the proposed approach provides more accurate subpixel resolution TSIs than conventional SPM methods. The SPM results obtained from the TSIs provide an excellent opportunity for LCLU dynamic monitoring and change detection at a finer spatial resolution than that of the available coarse spatial resolution TSIs.
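The blended objective described in this abstract can be illustrated with a per-subpixel score. The sketch below is an assumption-laden simplification: the 8-neighbourhood for spatial dependence, the equal blending weight `w`, and the function name are all chosen here for illustration, not taken from the paper.

```python
import numpy as np

def spatiotemporal_dependence(sub_map, temporal_map, i, j, cls, w=0.5):
    """Score for assigning class `cls` to subpixel (i, j).

    Blends spatial dependence (fraction of 8-neighbours in `sub_map`
    sharing `cls`) with temporal dependence (agreement with the fine
    map `temporal_map` from another date in the series).
    """
    nrows, ncols = sub_map.shape
    # Collect the 8-neighbourhood, clipped at image borders.
    neigh = [sub_map[y, x]
             for y in range(max(0, i - 1), min(nrows, i + 2))
             for x in range(max(0, j - 1), min(ncols, j + 2))
             if (y, x) != (i, j)]
    spatial = float(np.mean([n == cls for n in neigh]))
    temporal = float(temporal_map[i, j] == cls)
    return w * spatial + (1 - w) * temporal
```

A solver would maximize the sum of such scores over all subpixels, subject to each coarse pixel's class proportions (the coherence constraint).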

    Harmonization of Landsat and Sentinel 2 for Crop Monitoring in Drought Prone Areas: Case Studies of Ninh Thuan (Vietnam) and Bekaa (Lebanon)

    Proper satellite-based crop monitoring applications at the farm level often require near-daily imagery at medium to high spatial resolution. The combination of data from the ongoing satellite missions Sentinel 2 (ESA) and Landsat 7/8 (NASA) provides this unprecedented opportunity at a global scale; however, it is rarely implemented because the procedures are data demanding and computationally intensive. This study developed a robust processing chain for the harmonization of Landsat 7, Landsat 8 and Sentinel 2 in the Google Earth Engine cloud platform, drawing on the benefits of its coherent data structure, built-in functions and the computational power of the Google Cloud. Harmonized surface reflectance images were generated for two agricultural schemes in Bekaa (Lebanon) and Ninh Thuan (Vietnam) during 2018–2019. We evaluated the performance of several pre-processing steps needed for the harmonization, including image co-registration, Bidirectional Reflectance Distribution Function correction, topographic correction, and band adjustment. We found that the misregistration between Landsat 8 and Sentinel 2 images varied from 10 m in Ninh Thuan (Vietnam) to 32 m in Bekaa (Lebanon), and greatly degraded the quality of the final harmonized data set if left untreated. Analysis of a pair of overlapping L8-S2 images over the Bekaa region showed that, after the harmonization, all band-to-band spatial correlations were greatly improved. Finally, we demonstrated an application of the dense harmonized data set for crop mapping and monitoring. A harmonic (Fourier) analysis was applied to fit the detected unimodal, bimodal and trimodal shapes in the temporal NDVI patterns during one crop year in Ninh Thuan province. The derived phase and amplitude values of the crop cycles were combined with max-NDVI as an R-G-B false-color composite image.
The final image highlights croplands in bright colors (high phase and amplitude), while non-crop areas appear grey/dark (low phase and amplitude). The harmonized data sets (with 30 m spatial resolution), along with the Google Earth Engine scripts used, are provided for public use.
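The harmonic analysis step above (fitting phase and amplitude to an NDVI time series, then pairing them with max-NDVI for an R-G-B composite) can be sketched as an ordinary least-squares fit of a single annual harmonic. A single harmonic is an illustrative simplification; the study fits up to trimodal shapes, which would need additional harmonic terms.

```python
import numpy as np

def first_harmonic(t_days, ndvi, period=365.0):
    """Fit NDVI(t) = a0 + a*cos(2*pi*t/T) + b*sin(2*pi*t/T) by least
    squares and return (phase, amplitude, max_ndvi) - the three values
    that would feed the R-G-B false-color composite."""
    omega = 2.0 * np.pi * np.asarray(t_days, float) / period
    # Design matrix: constant, cosine, and sine columns.
    X = np.column_stack([np.ones_like(omega), np.cos(omega), np.sin(omega)])
    a0, a, b = np.linalg.lstsq(X, np.asarray(ndvi, float), rcond=None)[0]
    phase = float(np.arctan2(b, a))       # timing of the green-up peak
    amplitude = float(np.hypot(a, b))     # strength of the seasonal cycle
    return phase, amplitude, float(np.max(ndvi))
```

Bimodal or trimodal crop calendars would add cos/sin columns at 2x and 3x the base frequency to the same design matrix.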

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    NASA's surface biology and geology designated observable: A perspective on surface imaging algorithms

    The 2017–2027 National Academies' Decadal Survey, Thriving on Our Changing Planet, recommended Surface Biology and Geology (SBG) as a “Designated Targeted Observable” (DO). The SBG DO is based on the need for capabilities to acquire global, high spatial resolution, visible to shortwave infrared (VSWIR; 380–2500 nm; ~30 m pixel resolution) hyperspectral (imaging spectroscopy) and multispectral midwave and thermal infrared (MWIR: 3–5 μm; TIR: 8–12 μm; ~60 m pixel resolution) measurements with sub-monthly temporal revisits over terrestrial, freshwater, and coastal marine habitats. To address the various mission design needs, an SBG Algorithms Working Group of multidisciplinary researchers has been formed to review and evaluate the algorithms applicable to the SBG DO across a wide range of Earth science disciplines, including terrestrial and aquatic ecology, atmospheric science, geology, and hydrology. Here, we summarize current state-of-the-practice VSWIR and TIR algorithms that use airborne or orbital spectral imaging observations to address the SBG DO priorities identified by the Decadal Survey: (i) terrestrial vegetation physiology, functional traits, and health; (ii) inland and coastal aquatic ecosystem physiology, functional traits, and health; (iii) snow and ice accumulation, melting, and albedo; (iv) active surface composition (eruptions, landslides, evolving landscapes, hazard risks); (v) effects of changing land use on surface energy, water, momentum, and carbon fluxes; and (vi) managing agriculture, natural habitats, water use/quality, and urban development. We review existing algorithms in the following categories: snow/ice, aquatic environments, geology, and terrestrial vegetation, and summarize the community state of practice in each category. This effort synthesizes the findings of more than 130 scientists.

    Integrating random forest and crop modeling improves the crop yield prediction of winter wheat and oil seed rape

    Fast and accurate yield estimates remain a goal for precision agriculture and food security, even with the increasing availability and variety of global satellite products and the rapid development of new algorithms. However, the consistency and reliability of suitable methodologies that provide accurate crop yield outcomes still need to be explored. This study investigates the coupling of crop modeling and machine learning (ML) to improve the yield prediction of winter wheat (WW) and oil seed rape (OSR), with examples for the Free State of Bavaria (70,550 km²), Germany, in 2019. The main objective is to determine whether a coupling approach [Light Use Efficiency (LUE) + Random Forest (RF)] yields better and more accurate predictions than models that do not use the LUE. Four different RF models [RF1 (input: Normalized Difference Vegetation Index (NDVI)), RF2 (input: climate variables), RF3 (input: NDVI + climate variables), RF4 (input: LUE-generated biomass + climate variables)] and one semi-empirical LUE model were designed with different input requirements to find the best predictors for crop monitoring. The results indicate that the individual use of the NDVI (in RF1) or the climate variables (in RF2) is not the most accurate, reliable, and precise solution for crop monitoring; their combined use (in RF3), however, resulted in higher accuracies. Notably, the study suggests that coupling the LUE model variables into the RF4 model can reduce the relative root mean square error (RRMSE) by 8% (WW) and 1.6% (OSR) and increase the R² by 14.3% (for both WW and OSR), compared to results relying on the LUE alone. Moreover, the research compares the models' yield outputs using three different spatial inputs: Sentinel-2(S)-MOD13Q1 (10 m), Landsat (L)-MOD13Q1 (30 m), and MOD13Q1 (MODIS) (250 m).
The S-MOD13Q1 data relatively improved the performance of the models, with higher mean R² [0.80 (WW), 0.69 (OSR)] and lower RRMSE (%) (9.18, 10.21) compared to L-MOD13Q1 (30 m) and MOD13Q1 (250 m). Satellite-based crop biomass, solar radiation, and temperature were found to be the most influential variables in the yield prediction of both crops.
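The RRMSE and R² figures that this abstract uses to compare the RF models are standard definitions, and a minimal sketch of how such metrics are computed (pure NumPy, illustrative; the function names are chosen here) makes the reported comparisons concrete:

```python
import numpy as np

def rrmse(y_true, y_pred):
    """Relative RMSE: RMSE normalised by the mean observed yield, in %."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return 100.0 * rmse / np.mean(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination of predictions vs. observations."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

In the study's setup, each RF variant (RF1-RF4) would be scored with these metrics against observed yields, and the RF4 coupling (LUE-generated biomass + climate variables) is the configuration reported to lower RRMSE and raise R².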

    Modeling wildland fire radiance in synthetic remote sensing scenes

    This thesis develops a framework for implementing radiometric modeling and visualization of wildland fire. The ability to accurately model the physical and optical properties of wildfire and burned areas in an infrared remote sensing system will assist efforts in phenomenology studies, algorithm development, and sensor evaluation. Synthetic scenes are also needed for a Wildland Fire Dynamic Data Driven Applications System (DDDAS) for model feedback and update. A fast approach is presented to predict 3D flame geometry based on real-time measured heat flux, fuel loading, and wind speed; 3D flame geometry enables more realistic radiometric simulation. A Coupled Atmosphere-Fire Model is used to derive the parameters of the motion field and simulate fire dynamics and evolution. Broad-band target (fire, smoke, and burn scar) spectra are synthesized based on ground measurements and MODTRAN runs. Combining the temporal and spatial distribution of fire parameters with the target spectra, a physics-based model is used to generate radiance scenes depicting what the target might look like as seen by the airborne sensor. Radiance scene rendering of the 3D flame includes 2D hot ground and burn scar cooling, 3D flame direct radiation, and 3D indirect reflected radiation. Fire Radiative Energy (FRE) is a parameter derived from infrared remote sensing data that is applied to determine the radiative energy released during a wildland fire. FRE derived with the bi-spectral method and the MIR radiance method is applied to verify the fire radiance scenes synthesized in this research. The results for the synthetic scenes agree well with published values derived from wildland fire images.
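The fire radiative quantity used for verification above rests on grey-body radiation physics. The sketch below is a textbook Stefan-Boltzmann per-pixel sum, not the bi-spectral or MIR-radiance retrievals the thesis actually evaluates; the function name, unit emissivity default, and per-pixel kinetic temperatures are assumptions made for illustration.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def fire_radiative_power(kinetic_temps, pixel_area, emissivity=1.0):
    """Simplified fire radiative power (W): emissivity * sigma * area * T^4,
    summed over the fire pixels of a scene.

    kinetic_temps: per-fire-pixel kinetic temperatures in kelvin
    pixel_area:    ground area of one pixel in m^2
    """
    T = np.asarray(kinetic_temps, float)
    return float(np.sum(emissivity * SIGMA * pixel_area * T ** 4))
```

Integrating such instantaneous power over the burn duration gives an energy (FRE in joules), which is why scene-synthesis results can be checked against published retrievals.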