
    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods are the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationship between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) was analyzed. Our results showed that (1) the FSDAF model was the most robust to variations in LHI and TVI at both the scene level and the local level, but was less computationally efficient than all the other models except one-pair learning; (2) Fit-FC had the highest computational efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures, but was the least computationally efficient model; (4) STARFM was good at predicting phenological change, but was not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal change or abrupt change. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
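The weight function-based methods named in this abstract (STARFM, Fit-FC) share a common core idea: the coarse-band temporal change between the base date and the prediction date is added to the fine-resolution base image, with neighbouring pixels weighted by their spectral similarity to the centre pixel. A minimal sketch of that idea follows; it is a drastic simplification for illustration, and the function name, window logic, and similarity weighting are assumptions, not the published STARFM algorithm.

```python
import numpy as np

def stif_predict(fine_t0, coarse_t0, coarse_tp, win=5):
    """Minimal weight function-based spatio-temporal fusion sketch.

    Predicts a fine-resolution image at time tp from a fine/coarse pair
    at t0 plus the coarse image at tp (all arrays assumed pre-resampled
    to the same fine grid). The coarse-band temporal change is added to
    the fine base image, averaged over a moving window with weights
    given by spectral similarity to the centre pixel.
    """
    h, w = fine_t0.shape
    pad = win // 2
    change = coarse_tp - coarse_t0            # temporal change at coarse scale
    out = np.empty_like(fine_t0)
    fp = np.pad(fine_t0, pad, mode="reflect")
    cp = np.pad(change, pad, mode="reflect")
    for i in range(h):
        for j in range(w):
            fwin = fp[i:i + win, j:j + win]
            cwin = cp[i:i + win, j:j + win]
            # weight neighbours by spectral similarity to the centre pixel
            wgt = 1.0 / (np.abs(fwin - fine_t0[i, j]) + 1e-6)
            wgt /= wgt.sum()
            out[i, j] = fine_t0[i, j] + np.sum(wgt * cwin)
    return out
```

If the coarse-scale change is spatially uniform, the sketch simply shifts the fine base image by that change; the differences between the real methods lie in how the weights and the change term are refined.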

    Recent Advances in Image Restoration with Applications to Real World Problems

    Over the past few decades, imaging hardware has improved tremendously in resolution, enabling the widespread use of images in many diverse applications in Earth and planetary missions. However, practical issues associated with image acquisition still affect image quality. Issues such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution can seriously affect the accuracy of these applications. This book aims to give the reader a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Quantifying the Effect of Registration Error on Spatio-Temporal Fusion

    It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat provides data with coarse temporal but fine spatial resolution, while the moderate resolution imaging spectroradiometer (MODIS) provides data with fine temporal but coarse spatial resolution. One solution to this problem is to blend the two types of data using spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite of spatio-temporal fusion. Due to the potentially large differences between the spatial resolutions of the images to be fused, the geometric registration process always contains some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and the accuracy of fusion was investigated under different temporal distances between images, different spatial patterns within the images, and two typical spatio-temporal fusion methods (the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC). The results show that registration error has a significant impact on the accuracy of spatio-temporal fusion: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than that in a homogeneous region. Moreover, the accuracy of fusion depended not on the temporal distance between the images to be fused, but rather on their statistical correlation. Finally, the Fit-FC method was found to be more accurate than the STARFM method under all registration error scenarios.
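The monotonic relationship between registration error and accuracy can be illustrated with a toy experiment: shift an image by a growing number of pixels to simulate misregistration and track the RMSE against the unshifted original. This is a hedged sketch only; the test image, the shift model, and the error metric are illustrative choices, not the article's actual experimental protocol.

```python
import numpy as np

def rmse_under_shift(img, max_shift=5):
    """Simulate misregistration by shifting an image k pixels
    horizontally and measuring RMSE against the original."""
    errs = []
    for k in range(max_shift + 1):
        shifted = np.roll(img, k, axis=1)   # k-pixel horizontal misregistration
        errs.append(float(np.sqrt(np.mean((img - shifted) ** 2))))
    return errs

# a smooth horizontal gradient: error grows with the size of the shift
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
errs = rmse_under_shift(img)
```

For this smooth gradient the RMSE rises steadily with the shift; on imagery with fine spatial structure (the heterogeneous case in the article), the rise is steeper, since even a one-pixel shift misaligns object boundaries.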

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp, recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide across a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, integrating temporal information with the spatial and/or spectral/backscattering information of remotely sensed data makes it possible to move from 2D/3D data representations to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but methods for fusing different modalities have evolved along different paths in each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at all levels (i.e., students, researchers, and senior researchers) wishing to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.

    Characterizing Spatiotemporal Patterns of White Mold in Soybean across South Dakota Using Remote Sensing

    Soybean is among the most important crops, cultivated primarily for beans, which are used for food, feed, and biofuel. According to the FAO, the United States was the largest soybean producer in 2016. The main soybean-producing regions in the United States are the Corn Belt and the lower Mississippi Valley. Despite its importance, soybean production is reduced by several diseases, among which Sclerotinia stem rot, also known as white mold and caused by the fungus Sclerotinia sclerotiorum, ranks among the top 10 soybean diseases. The disease may attack several plants and considerably reduce yield. According to previous reports, the environmental conditions that correspond to high yield potential are also the most conducive to white mold development. These conditions include cool temperatures (12-24 °C) and continued wet and moist conditions (70-120 h), generally resulting from rain, but disease development also requires the presence of a susceptible soybean variety. To better understand white mold development in the field, there is a need to investigate its spatiotemporal characteristics and provide accurate estimates of the damage that white mold may cause. Current and accurate data about white mold are scarce, especially at the county or larger scale. Studies that explored the characteristics of white mold were generally field-oriented and local in scale, and when spectral characteristics were investigated, the authors used spectroradiometers that are not accessible to farmers or the general public and are mostly used for experimental modeling. This study employed freely available Landsat 8 images to quantify white mold in South Dakota. Images acquired in May and July were used to map the land cover and extract the soybean mask, while an image acquired in August was used to map and quantify white mold using the random forest algorithm. The land cover map was produced with an overall accuracy of 95%, while white mold was mapped with an overall accuracy of 99%.
    White mold area estimates were 132 km², 88 km², and 190 km², representing 31%, 22%, and 29% of the total soybean area for Marshall, Codington, and Day counties, respectively. This study also explored the spatial characteristics of white mold in soybean fields and its impact on yield. The yield distribution exhibited significant positive spatial autocorrelation (Moran’s I = 0.38, p-value < 0.001 for the Moody field; Moran’s I = 0.45, p-value < 0.001 for the Marshall field), evidence of clustering. Significant clusters could be observed in white mold areas (low-low clusters) or in healthy soybeans (high-high clusters). The yield loss caused by the most severe white mold was estimated at 36% and 56% for the Moody and Marshall fields, respectively, with the most accurate loss estimation occurring between late August and early September. Finally, this study modeled the temporal evolution of white mold using logistic regression, with white mold modeled as a function of NDVI. The model was successful and was further improved by including the day of year (DOY): the areas under the curve (AUC) were 0.95 for the NDVI model and 0.99 for the NDVI+DOY model. A comparison of NDVI temporal change between sites showed that white mold temporal development was affected by site location, which could be influenced by many local parameters such as soil properties, local elevation, management practices, or weather. This study showed the importance of freely available remotely sensed satellite images in estimating crop disease areas and characterizing the spatial and temporal patterns of crop disease, which could help in timely disease damage assessment.
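The clustering evidence reported above rests on global Moran's I, which compares each observation's deviation from the mean with that of its spatial neighbours: positive values indicate clustering, negative values dispersion. A generic sketch of the statistic follows; the chain (rook) adjacency weights in the usage example are an illustrative assumption, since the abstract does not state the study's actual neighbourhood definition.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: positive for clustered values, negative for
    dispersed (checkerboard-like) values, near zero for random ones."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                      # deviations from the mean
    num = n * np.sum(w * np.outer(z, z))  # spatially weighted cross-products
    den = w.sum() * np.sum(z ** 2)
    return num / den

# six plots on a line with chain (rook) adjacency -- an illustrative layout
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0

morans_i([1, 1, 1, 0, 0, 0], w)  # clustered values -> positive I
morans_i([1, 0, 1, 0, 1, 0], w)  # alternating values -> negative I
```

In the study's setting, low-low clusters (white mold patches) and high-high clusters (healthy soybeans) are the local counterparts of a positive global I.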

    Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow

    We investigate the challenging task of nighttime optical flow, which suffers from weakened texture and amplified noise. These degradations weaken discriminative visual features and thus cause invalid motion feature matching. Existing methods typically employ domain adaptation to transfer knowledge from an auxiliary domain to the nighttime domain in either the input visual space or the output motion space. However, this direct adaptation is ineffective, since there exists a large domain gap due to the intrinsically heterogeneous feature representations of the auxiliary and nighttime domains. To overcome this issue, we explore a common latent space as an intermediate bridge to reinforce feature alignment between the auxiliary and nighttime domains. In this work, we exploit two auxiliary domains, daytime and event, and propose a novel common appearance-boundary adaptation framework for nighttime optical flow. In appearance adaptation, we employ intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space. We find that the motion distributions of the two reflectance maps are very similar, allowing us to consistently transfer motion appearance knowledge from the daytime to the nighttime domain. In boundary adaptation, we theoretically derive the motion correlation formula between the nighttime image and accumulated events within a spatiotemporal gradient-aligned common space. We find that the correlations of the two spatiotemporal gradient maps exhibit a significant discrepancy, allowing us to contrastively transfer boundary knowledge from the event to the nighttime domain. Moreover, appearance adaptation and boundary adaptation are complementary, since together they transfer global motion and local boundary knowledge to the nighttime domain.

    Spatiotemporal Fusion in Remote Sensing

    Remote sensing images and techniques are powerful tools for investigating the Earth’s surface. Data quality is the key to enhancing remote sensing applications, yet obtaining clear and noise-free data is very difficult in most situations, due to varying acquisition conditions (e.g., atmosphere and season) and sensor and platform conditions (e.g., satellite viewing angles and sensor characteristics). With the continuing development of satellites, terabytes of remote sensing images can now be acquired every day. Information and data fusion is therefore particularly important in the remote sensing community: fusion integrates data from various sources, acquired asynchronously, for information extraction, analysis, and quality improvement. In this chapter, we discuss the theory of spatiotemporal fusion by reviewing previous works, and describe its basic concepts and some of its applications by summarizing our prior and ongoing work.