681 research outputs found

    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods are the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationship between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) was analyzed. Our results showed that (1) the FSDAF model was the most robust to variations in LHI and TVI at both the scene level and the local level, although it was less computationally efficient than all models except one-pair learning; (2) Fit-FC had the highest computing efficiency and was accurate in predicting reflectance, but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures, but was the least computationally efficient model; (4) STARFM was good at predicting phenological change, but is not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
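The weight function-based idea behind STARFM can be illustrated with a minimal NumPy sketch: the coarse-resolution temporal change is added to the fine image at the base date, with neighbouring pixels weighted by their spectral similarity to the centre pixel. The function name and the simple inverse-distance weighting below are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def simple_starfm_like(fine_t1, coarse_t1, coarse_t2, win=3):
    """Toy STARFM-style prediction of the fine image at t2: add the
    coarse temporal change to the fine base image, averaging the change
    over a window with spectral-similarity weights."""
    pad = win // 2
    f = np.pad(fine_t1, pad, mode="edge")
    dc = np.pad(coarse_t2 - coarse_t1, pad, mode="edge")
    out = np.empty_like(fine_t1, dtype=float)
    h, w = fine_t1.shape
    for i in range(h):
        for j in range(w):
            fwin = f[i:i + win, j:j + win]
            dwin = dc[i:i + win, j:j + win]
            # weight neighbours by inverse spectral distance to the centre
            wgt = 1.0 / (np.abs(fwin - f[i + pad, j + pad]) + 1e-6)
            wgt /= wgt.sum()
            out[i, j] = f[i + pad, j + pad] + (wgt * dwin).sum()
    return out
```

With a spatially uniform coarse change, the sketch reduces to adding that change to the fine base image, which is the intended sanity check.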

    Quantifying the Effect of Registration Error on Spatio-Temporal Fusion

    It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat data have a coarse temporal resolution but fine spatial resolution, while moderate resolution imaging spectroradiometer (MODIS) data have fine temporal resolution but coarse spatial resolution. One solution to this problem is to blend the two types of data using spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite of spatio-temporal fusion. Due to the potentially large differences between the spatial resolutions of the images to be fused, the geometric registration process always contains some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and the accuracy of fusion was investigated under different temporal distances between images, different spatial patterns within the images, and two typical spatio-temporal fusion methods: the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC. The results show that registration error has a significant impact on the accuracy of spatio-temporal fusion: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than that in a homogeneous region. Moreover, the accuracy of fusion was not dependent on the temporal distance between the images to be fused, but rather on their statistical correlation. Finally, the Fit-FC method was found to be more accurate than the STARFM method under all registration error scenarios.
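The core of a registration-error experiment like the one described can be mimicked in a few lines: mis-register a copy of an image by a known number of pixels and measure the RMSE against the original, once for a heterogeneous scene and once for a homogeneous one. This is a hedged sketch on synthetic scenes, not the article's actual experiment or data.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def registration_rmse(img, shift):
    """RMSE between an image and a copy mis-registered by `shift` pixels
    (compared over the overlapping region only)."""
    if shift == 0:
        return 0.0
    return rmse(img[:, shift:], img[:, :-shift])

# Heterogeneous (random) vs. homogeneous (smooth gradient) synthetic scenes
rng = np.random.default_rng(0)
hetero = rng.random((64, 64))
homo = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
errs_het = [registration_rmse(hetero, s) for s in range(4)]
errs_hom = [registration_rmse(homo, s) for s in range(4)]
```

On the gradient scene the error grows monotonically with the shift, and for any given shift the error in the heterogeneous scene is far larger, mirroring the article's two qualitative findings.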

    μ‹œκ³΅κ°„ 해상도 ν–₯상을 ν†΅ν•œ 식생 λ³€ν™” λͺ¨λ‹ˆν„°λ§

    Thesis (Ph.D.) -- Seoul National University Graduate School: Interdisciplinary Program in Landscape Architecture, Graduate School of Environmental Studies, 2023. 2. Youngryel Ryu. In terrestrial ecosystems, monitoring vegetation change is necessary to understand interactions between the atmosphere and the biosphere. Satellite imagery can provide vegetation maps by observing the land surface, but detailed information on surface change has been limited by clouds and by the spatial resolution of the imagery. Moreover, the influence of the spatiotemporal resolution of satellite imagery on photosynthesis monitoring through vegetation maps has not been fully revealed. This dissertation aims to enhance the spatial and temporal resolution of satellite imagery to generate high-resolution vegetation maps at a daily interval. To extend vegetation change monitoring in time and space using high-resolution satellite imagery, I 1) improved temporal resolution through image fusion using geostationary satellites, 2) improved spatial resolution using generative adversarial networks, and 3) monitored plant photosynthesis with high-spatiotemporal-resolution satellite imagery over spatially heterogeneous land cover. With the advent of such techniques in satellite-based remote sensing, current and past satellite imagery can be enhanced in spatiotemporal resolution for monitoring vegetation change. Chapter 2 shows that temporal resolution improves when plant photosynthesis is monitored by spatiotemporal image fusion using geostationary satellite imagery. The fusion involves cloud detection, bidirectional reflectance distribution function adjustment, spatial registration, spatiotemporal fusion, and spatiotemporal gap-filling. The fusion product was evaluated at two sites with large interannual variation in vegetation indices due to cultivation management and other factors (a cropland and a deciduous forest). The product predicted in situ observations without gaps (R2 = 0.71, relative bias = 5.64% at the cropland; R2 = 0.79, relative bias = -13.8% at the deciduous forest). The spatiotemporal fusion progressively improved the spatiotemporal resolution of the vegetation maps, reducing the underestimation of in situ observations by satellite imagery during the growing season.
Because image fusion produces daily photosynthesis maps at high spatiotemporal resolution, I expect it to help reveal processes of vegetation change hidden by the limited spatiotemporal resolution of satellite imagery. The spatial distribution of vegetation is essential for precision agriculture and land cover change monitoring. High-resolution satellite imagery has made observation of the Earth's surface easier. In particular, Planet Fusion provides gap-free surface reflectance at 3 m spatial resolution by fully exploiting CubeSat constellation data. However, the spatial resolution of past satellite sensors (30-60 m for Landsat) has limited detailed analysis of spatial variation in vegetation. In Chapter 3, to enhance the spatial resolution of Landsat data, I trained a dual super-resolution generative adversarial network (the dual RSS-GAN) on Planet Fusion and Landsat 8 data to generate high-resolution maps of the normalized difference vegetation index (NDVI) and near-infrared reflectance of vegetation (NIRv). The performance of the dual RSS-GAN was evaluated against tower-based in situ vegetation indices (up to 8 years) and drone-based hyperspectral maps at two sites in the Republic of Korea (a cropland and a deciduous forest). The dual RSS-GAN enhanced the spatial resolution of Landsat 8 imagery, complementing its spatial representation and capturing the seasonal variation of vegetation indices (R2 > 0.96). It also mitigated the underestimation of in situ observations by Landsat 8 vegetation indices; relative bias values against in situ observations were -0.8% to -1.5% for the dual RSS-GAN and -10.3% to -4.6% for Landsat 8. This improvement was possible because the dual RSS-GAN learned the spatial information of Planet Fusion. These results constitute a new approach that enhances the spatial resolution of Landsat imagery to provide hidden spatial information. High-resolution maps of plant photosynthesis are essential for carbon cycle monitoring over complex land cover. However, satellites in sun-synchronous orbit, such as Sentinel-2, Landsat, and MODIS, can provide imagery of high spatial or high temporal resolution, but not both. Recently launched CubeSat constellations can overcome this resolution limitation.
In particular, Planet Fusion can observe the land surface at the spatiotemporal resolution of CubeSat data. In Chapter 4, I used Planet Fusion surface reflectance to generate daily maps of near-infrared radiation reflected from vegetation (NIRvP) at 3 m resolution. I then evaluated the performance of the NIRvP maps for estimating plant photosynthesis by comparison with flux tower network data from the Sacramento-San Joaquin Delta, California, USA. Overall, the NIRvP maps captured the temporal variation of plant photosynthesis at individual sites despite frequent water level changes in the wetlands. However, the relationship between the NIRvP maps and photosynthesis across all sites was strong only when the maps were matched to the flux tower footprints. With footprints matched, the NIRvP maps outperformed in situ NIRvP in estimating photosynthesis. This difference in performance arose because, with the flux tower footprints matched, the slopes of the NIRvP-photosynthesis relationships were consistent across sites. These results show the importance of matching satellite observations to flux tower footprints and demonstrate the potential of CubeSat constellation data for remote monitoring of plant photosynthesis at high spatiotemporal resolution. Monitoring changes in terrestrial vegetation is essential to understanding interactions between the atmosphere and the biosphere, especially in terrestrial ecosystems. To this end, satellite remote sensing offers maps for examining the land surface at different scales. However, detailed information has been hindered by clouds or limited by the spatial resolution of satellite imagery. Moreover, the impacts of spatial and temporal resolution on photosynthesis monitoring have not been fully revealed. In this dissertation, I aimed to enhance the spatial and temporal resolution of satellite imagery towards daily gap-free vegetation maps with high spatial resolution.
In order to expand vegetation change monitoring in time and space using high-resolution satellite images, I 1) improved the temporal resolution of a satellite dataset through image fusion using geostationary satellites, 2) improved the spatial resolution of a satellite dataset using generative adversarial networks, and 3) showed the use of high-spatiotemporal-resolution maps for monitoring plant photosynthesis, especially over heterogeneous landscapes. With the advent of new techniques in satellite remote sensing, current and past datasets can be fully utilized for monitoring vegetation changes with respect to spatial and temporal resolution. In Chapter 2, I developed an integrated system that implemented geostationary satellite products in a spatiotemporal image fusion method for monitoring canopy photosynthesis. The integrated system contains a series of processes (i.e., cloud masking, nadir bidirectional reflectance distribution function adjustment, spatial registration, spatiotemporal image fusion, spatial gap-filling, and temporal gap-filling). I evaluated the integrated system over a heterogeneous rice paddy landscape, where drastic land cover changes were caused by cultivation management, and a deciduous forest, where consecutive changes occurred in time. The results showed that the integrated system predicted in situ measurements well and without data gaps (R2 = 0.71, relative bias = 5.64% at the rice paddy site; R2 = 0.79, relative bias = -13.8% at the deciduous forest site). The integrated system gradually improved the spatiotemporal resolution of vegetation maps, reducing the underestimation of in situ measurements, especially during the peak growing season. Since the integrated system generates daily canopy photosynthesis maps with high spatial resolution for monitoring dynamics among regions of interest worldwide, I anticipate future efforts to reveal the information hidden by the limited spatial and temporal resolution of satellite imagery.
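The processing chain listed above (cloud masking, BRDF adjustment, spatial registration, spatiotemporal fusion, gap-filling) can be sketched as a pipeline of stages. Only the step names come from the text; every implementation below is a toy placeholder assumed for illustration, not the thesis's actual system.

```python
import numpy as np

def cloud_mask(img, thresh=0.9):
    return np.where(img > thresh, np.nan, img)   # mask bright (cloudy) pixels

def brdf_adjust(img, factor=1.0):
    return img * factor                          # stand-in for NBAR adjustment

def register(img, shift=0):
    return np.roll(img, shift, axis=1)           # stand-in for co-registration

def fuse(fine_t1, coarse_t1, coarse_t2):
    return fine_t1 + (coarse_t2 - coarse_t1)     # minimal spatiotemporal fusion

def gap_fill(img):
    fill = np.nanmean(img)                       # stand-in for spatial/temporal
    return np.where(np.isnan(img), fill, img)    # gap-filling

def integrated_system(fine_t1, coarse_t1, coarse_t2):
    """Chain the stages in the order listed in the abstract."""
    fine_t1 = gap_fill(register(brdf_adjust(cloud_mask(fine_t1))))
    coarse_t1, coarse_t2 = (gap_fill(brdf_adjust(cloud_mask(c)))
                            for c in (coarse_t1, coarse_t2))
    return fuse(fine_t1, coarse_t1, coarse_t2)
```

The point of the skeleton is the ordering of the stages: masking and adjustment happen per image, gap-filling happens before fusion, and fusion combines the cleaned fine base image with the coarse temporal change.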
Detailed spatial representations of terrestrial vegetation are essential for precision agricultural applications and the monitoring of land cover changes in heterogeneous landscapes. The advent of satellite-based remote sensing has facilitated daily observations of the Earth's surface with high spatial resolution. In particular, a data fusion product such as Planet Fusion has realized the delivery of daily, gap-free surface reflectance data with 3-m pixel resolution through full utilization of relatively recent (i.e., 2018-) CubeSat constellation data. However, the spatial resolution of past satellite sensors (i.e., 30–60 m for Landsat) has restricted the detailed spatial analysis of past changes in vegetation. In Chapter 3, to overcome the spatial resolution constraint of Landsat data for long-term vegetation monitoring, we propose a dual remote-sensing super-resolution generative adversarial network (dual RSS-GAN) combining Planet Fusion and Landsat 8 data to simulate spatially enhanced long-term time-series of the normalized difference vegetation index (NDVI) and near-infrared reflectance from vegetation (NIRv). We evaluated the performance of the dual RSS-GAN against in situ tower-based continuous measurements (up to 8 years) and remotely piloted aerial system-based maps of cropland and deciduous forest in the Republic of Korea. The dual RSS-GAN enhanced spatial representations in Landsat 8 images and captured seasonal variation in vegetation indices (R2 > 0.95, for the dual RSS-GAN maps vs. in situ data from all sites). Overall, the dual RSS-GAN reduced Landsat 8 vegetation index underestimations compared with in situ measurements; relative bias values of NDVI ranged from βˆ’3.2% to 1.2% and βˆ’12.4% to βˆ’3.7% for the dual RSS-GAN and Landsat 8, respectively. This improvement was caused by spatial enhancement through the dual RSS-GAN, which captured fine-scale information from Planet Fusion.
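The two indices the dual RSS-GAN simulates have standard definitions: NDVI is the normalized difference of near-infrared (NIR) and red reflectance, and NIRv is NDVI multiplied by NIR reflectance. A minimal sketch (the function names are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def nirv(nir, red):
    """Near-infrared reflectance of vegetation: NDVI x NIR reflectance."""
    return ndvi(nir, red) * nir
```

Both functions work on scalars or NumPy arrays, so they apply per pixel to whole reflectance maps.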
This study presents a new approach for the restoration of hidden sub-pixel spatial information in Landsat images. Mapping canopy photosynthesis at both high spatial and high temporal resolution is essential for carbon cycle monitoring in heterogeneous areas. However, well-established satellites in sun-synchronous orbits, such as Sentinel-2, Landsat, and MODIS, can only provide either high spatial or high temporal resolution, but not both. Recently established CubeSat satellite constellations have created an opportunity to overcome this resolution trade-off. In particular, Planet Fusion allows full utilization of the CubeSat data resolution and coverage while maintaining high radiometric quality. In Chapter 4, I used the Planet Fusion surface reflectance product to calculate daily, 3-m resolution, gap-free maps of the near-infrared radiation reflected from vegetation (NIRvP). I then evaluated the performance of these NIRvP maps for estimating canopy photosynthesis by comparing them with data from a flux tower network in the Sacramento-San Joaquin Delta, California, USA. Overall, NIRvP maps captured temporal variations in canopy photosynthesis at individual sites, despite changes in water extent in the wetlands and frequent mowing in the crop fields. When combining data from all sites, however, I found that robust agreement between NIRvP maps and canopy photosynthesis could only be achieved when matching NIRvP maps to the flux tower footprints. In this case of matched footprints, NIRvP maps showed considerably better performance than in situ NIRvP in estimating canopy photosynthesis, both for daily sums and for data around the time of satellite overpass (R2 = 0.78 vs. 0.60, for maps vs. in situ, for the satellite overpass time case). This difference in performance was mostly due to the higher degree of consistency in slopes of NIRvP-canopy photosynthesis relationships across the study sites for flux tower footprint-matched maps.
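NIRvP extends NIRv by multiplying it with incoming photosynthetically active radiation (PAR), and footprint matching amounts to aggregating the map with flux-tower footprint weights. A sketch under those assumptions; the simple weighted sum stands in for a full footprint model, and the function names are illustrative.

```python
import numpy as np

def nirvp(nir, red, par):
    """NIRvP = NDVI x NIR reflectance x incoming PAR (a photosynthesis proxy)."""
    return (nir - red) / (nir + red) * nir * par

def footprint_match(nirvp_map, footprint):
    """Aggregate a NIRvP map with flux-tower footprint weights
    (weights are normalized so they sum to 1)."""
    w = footprint / footprint.sum()
    return float((nirvp_map * w).sum())
```

A uniform map aggregates to its own value regardless of the footprint shape; for a heterogeneous map, the footprint weighting is what aligns the satellite estimate with what the tower actually "sees".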
Our results show the importance of matching satellite observations to the flux tower footprint and demonstrate the potential of CubeSat constellation imagery to monitor canopy photosynthesis remotely at high spatio-temporal resolution.
Contents:
Chapter 1. Introduction
  1. Background
    1.1 Daily gap-free surface reflectance using geostationary satellite products
    1.2 Monitoring past vegetation changes with high-spatial-resolution
    1.3 High spatiotemporal resolution vegetation photosynthesis maps
  2. Purpose of Research
Chapter 2. Generating daily gap-filled BRDF adjusted surface reflectance product at 10 m resolution using geostationary satellite product for monitoring daily canopy photosynthesis
  1. Introduction
  2. Methods
    2.1 Study sites
    2.2 In situ measurements
    2.3 Satellite products
    2.4 Integrated system
    2.5 Canopy photosynthesis
    2.6 Evaluation
  3. Results and discussion
    3.1 Comparison of STIF NDVI and NIRv with in situ NDVI and NIRv
    3.2 Comparison of STIF NIRvP with in situ NIRvP
  4. Conclusion
Chapter 3. Super-resolution of historic Landsat imagery using a dual Generative Adversarial Network (GAN) model with CubeSat constellation imagery for monitoring vegetation changes
  1. Introduction
  2. Methods
    2.1 Real-ESRGAN model
    2.2 Study sites
    2.3 In situ measurements
    2.4 Vegetation index
    2.5 Satellite data
    2.6 Planet Fusion
    2.7 Dual RSS-GAN via fine-tuned Real-ESRGAN
    2.8 Evaluation
  3. Results
    3.1 Comparison of NDVI and NIRv maps from Planet Fusion, Sentinel 2 NBAR, and Landsat 8 NBAR data with in situ NDVI and NIRv
    3.2 Comparison of dual RSS-SRGAN model results with Landsat 8 NDVI and NIRv
    3.3 Comparison of dual RSS-GAN model results with respect to in situ time-series NDVI and NIRv
    3.4 Comparison of the dual RSS-GAN model with NDVI and NIRv maps derived from RPAS
  4. Discussion
    4.1 Monitoring changes in terrestrial vegetation using the dual RSS-GAN model
    4.2 CubeSat data in the dual RSS-GAN model
    4.3 Perspectives and limitations
  5. Conclusion
  Appendices
  Supplementary material
Chapter 4. Matching high resolution satellite data and flux tower footprints improves their agreement in photosynthesis estimates
  1. Introduction
  2. Methods
    2.1 Study sites
    2.2 In situ measurements
    2.3 Planet Fusion NIRvP
    2.4 Flux footprint model
    2.5 Evaluation
  3. Results
    3.1 Comparison of Planet Fusion NIRv and NIRvP with in situ NIRv and NIRvP
    3.2 Comparison of instantaneous Planet Fusion NIRv and NIRvP against tower GPP estimates
    3.3 Daily GPP estimation from Planet Fusion-derived NIRvP
  4. Discussion
    4.1 Flux tower footprint matching and effects of spatial and temporal resolution on GPP estimation
    4.2 Roles of radiation component in GPP mapping
    4.3 Limitations and perspectives
  5. Conclusion
  Appendix
  Supplementary Materials
Chapter 5. Conclusion
Bibliography
Abstract in Korean
Acknowledgements

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. There are a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for the fusion of different modalities have expanded along different paths according to each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches with respect to different research communities and provides a thorough and discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.

    Spatiotemporal Data Augmentation of MODIS-LANDSAT Water Bodies Using Generative Adversarial Networks

    The monitoring of the shape and area of a water body is an essential component of many Earth science and hydrological applications, and these applications require remote sensing data that support accurate analysis of water bodies. This thesis attempts exactly that. First, a model is created that maps information from imagery captured by one satellite at 500 m resolution to imagery captured by a different satellite at 30 m resolution. To achieve this, we collected data from both satellites and translated the data from one to the other using our proposed Hydro-GAN model. This translation yields an accurate shape, boundary, and area for the water body. We evaluated the method using several similarity metrics for the area and shape of the water body. The second part of this thesis augments the data obtained from the Hydro-GAN model with the original data and uses this enriched dataset to predict the future area of a water body, with the Great Salt Lake as a case study. The results indicated that our proposed model produced accurate areas and shapes for the water bodies, and that generating data at 30 m resolution improved areal and shape accuracy. With more data at this resolution, coastal lines and boundaries could be better predicted, and erosion could be better monitored.
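Shape agreement between a translated water mask and a reference mask can be scored with standard overlap metrics. The abstract does not name its similarity metrics, so intersection-over-union (IoU) is shown here purely as an illustrative choice:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean water masks: the overlap area
    divided by the combined area (1.0 when both masks are empty)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0
```

IoU is 1.0 for identical masks and 0.0 for disjoint ones, so it captures both areal and shape disagreement in a single number.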

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Spatiotemporal Fusion of Land Surface Temperature Based on a Convolutional Neural Network

    Due to the tradeoff between spatial and temporal resolutions commonly encountered in remote sensing, no single satellite sensor can provide fine spatial resolution land surface temperature (LST) products with frequent coverage. This situation greatly limits applications that require LST data with fine spatiotemporal resolution. Here, a deep learning-based spatiotemporal temperature fusion network (STTFN) method for the generation of fine spatiotemporal resolution LST products is proposed. In STTFN, a multiscale fusion convolutional neural network is employed to build the complex nonlinear relationship between input and output LSTs. Thus, unlike other LST spatiotemporal fusion approaches, STTFN is able to learn potentially complicated relationships from training data without manually designed mathematical rules, making it more flexible and intelligent than other methods. In addition, two target fine spatial resolution LST images are predicted and then integrated by a spatiotemporal-consistency (STC)-weighting function to take advantage of the STC of LST data. A set of analyses using two real LST data sets obtained from Landsat and the moderate resolution imaging spectroradiometer (MODIS) was undertaken to evaluate the ability of STTFN to generate fine spatiotemporal resolution LST products. The results show that, compared with three classic fusion methods (the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), the spatiotemporal integrated temperature fusion model (STITFM), and the two-stream convolutional neural network for spatiotemporal image fusion (StfNet)), the proposed network produced the most accurate outputs (average root mean square error (RMSE) = 0.971).
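The STC-weighting step combines the two predicted fine-resolution LST images into one output. The abstract does not give the exact weighting, so this sketch assumes simple inverse-error weights as a stand-in for the published formulation:

```python
import numpy as np

def stc_weighted_fusion(pred_a, pred_b, err_a, err_b, eps=1e-6):
    """Combine two fine-resolution LST predictions with weights inversely
    proportional to their (scalar) consistency errors. A hedged sketch of
    the STC-weighting idea, not STTFN's exact function."""
    wa, wb = 1.0 / (err_a + eps), 1.0 / (err_b + eps)
    return (wa * pred_a + wb * pred_b) / (wa + wb)
```

With equal errors the result is the plain average; as one prediction's consistency error shrinks, the output converges to that prediction, which is the behaviour such a weighting is meant to provide.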