104 research outputs found

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges for the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops along two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.
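    As a rough illustration of the spatial–spectral processing that such reviews survey, the sketch below applies a minimal 3D convolutional classifier to a hyperspectral patch in PyTorch. The band count, patch size, and class count are placeholder assumptions; the model is not taken from the review.

```python
# Minimal spectral-spatial 3D CNN sketch (illustrative only, not from the review).
import torch
import torch.nn as nn

class SpectralSpatialCNN(nn.Module):
    """Tiny 3D CNN that convolves jointly over spectral bands and pixels."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),  # spectral x spatial kernel
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # pool over bands and space
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):  # x: (batch, 1, bands, height, width)
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Example: classify 9x9 patches cut from a 103-band cube (placeholder sizes).
patches = torch.randn(4, 1, 103, 9, 9)
logits = SpectralSpatialCNN(n_classes=9)(patches)
print(logits.shape)  # torch.Size([4, 9])
```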

    Monitoring vegetation change through enhanced spatiotemporal resolution

    Doctoral dissertation -- Seoul National University Graduate School: Graduate School of Environmental Studies, Interdisciplinary Program in Landscape Architecture, February 2023. Advisor: λ₯˜μ˜λ ¬. Monitoring vegetation change is necessary to understand the interactions between the atmosphere and the biosphere in terrestrial ecosystems. Satellite imagery can provide vegetation maps by observing the land surface, but detailed information on surface change has been limited by clouds and by the spatial resolution of satellite images. Moreover, the effect of the spatiotemporal resolution of satellite imagery on photosynthesis monitoring through vegetation maps has not been fully revealed. This dissertation aims to enhance the spatiotemporal resolution of satellite imagery to produce daily, high-resolution vegetation maps. To extend vegetation change monitoring in time and space using high-resolution satellite imagery, I 1) improved temporal resolution through image fusion with geostationary satellites, 2) improved spatial resolution using generative adversarial networks, and 3) monitored plant photosynthesis with high-spatiotemporal-resolution satellite imagery over landscapes with heterogeneous land cover. With the advent of new techniques in satellite remote sensing, current and past satellite imagery can thus be enhanced in spatial and temporal resolution for monitoring vegetation change. Chapter 2 shows that monitoring canopy photosynthesis with spatiotemporal image fusion of geostationary satellite imagery improves temporal resolution. The fusion involves cloud masking, bidirectional reflectance distribution function adjustment, spatial registration, spatiotemporal fusion, and spatial and temporal gap-filling. The fusion products were evaluated at two sites (a cropland and a deciduous forest) where vegetation indices vary strongly within the year owing to cultivation management and phenology. The fusion products predicted in situ observations without data gaps (R2 = 0.71, relative bias = 5.64% at the cropland site; R2 = 0.79, relative bias = -13.8% at the deciduous forest site). The spatiotemporal fusion progressively improved the spatiotemporal resolution of the vegetation maps, reducing the underestimation of in situ observations by satellite imagery during the growing season. Because the fusion generates daily photosynthesis maps at high spatiotemporal resolution, I expect it to help reveal processes of vegetation change that have been obscured by the limited spatiotemporal resolution of satellite imagery. Detailed spatial distributions of vegetation are essential for precision agriculture and land cover change monitoring, and high-resolution satellite imagery has made observation of the Earth's surface easier. In particular, Planet Fusion is a gap-free surface reflectance product at 3 m spatial resolution that makes full use of CubeSat constellation data. However, the spatial resolution of past satellite sensors (30–60 m for Landsat) has limited detailed analysis of spatial changes in vegetation. In Chapter 3, to enhance the spatial resolution of Landsat data, I trained a dual generative adversarial network (the dual RSS-GAN) with Planet Fusion and Landsat 8 data to generate high-resolution maps of the normalized difference vegetation index (NDVI) and near-infrared reflectance from vegetation (NIRv). The performance of the dual RSS-GAN was evaluated against tower-based in situ vegetation indices (up to 8 years) and drone-based hyperspectral maps at two sites (a cropland and a deciduous forest) in the Republic of Korea. The dual RSS-GAN enhanced the spatial resolution of Landsat 8 imagery, complementing its spatial representation and capturing the seasonal variation of vegetation indices (R2 > 0.96). It also mitigated the underestimation of Landsat 8 vegetation indices relative to in situ observations: relative bias values ranged from -0.8% to -1.5% for the dual RSS-GAN and from -10.3% to -4.6% for Landsat 8. This improvement was possible because the dual RSS-GAN learned the fine-scale spatial information of Planet Fusion.
These results represent a new approach that enhances the spatial resolution of Landsat imagery to reveal previously hidden spatial information. High-resolution maps of canopy photosynthesis are essential for carbon cycle monitoring over landscapes with complex land cover. However, satellites in sun-synchronous orbits, such as Sentinel-2, Landsat, and MODIS, can provide imagery with either high spatial resolution or high temporal resolution, but not both. Recently launched CubeSat constellations can overcome this resolution limitation. In particular, Planet Fusion enables observation of the land surface at the spatiotemporal resolution of CubeSat data. In Chapter 4, I used the Planet Fusion surface reflectance product to generate daily, 3 m resolution maps of near-infrared radiation reflected from vegetation (NIRvP). I then evaluated the performance of these NIRvP maps for estimating canopy photosynthesis by comparison with flux tower network data from the Sacramento-San Joaquin Delta, California, USA. Overall, the NIRvP maps captured temporal variation in canopy photosynthesis at individual sites despite frequent changes in water extent in the wetlands. Across all sites, however, the relationship between NIRvP maps and canopy photosynthesis was strong only when the NIRvP maps were matched to the flux tower footprints. With matched footprints, the NIRvP maps outperformed in situ NIRvP in estimating canopy photosynthesis. This difference in performance arose because the slopes of the NIRvP-photosynthesis relationship were consistent across the study sites when flux tower footprints were matched. These results demonstrate the importance of matching satellite observations to flux tower footprints and the potential of CubeSat constellation data for remote monitoring of canopy photosynthesis at high spatiotemporal resolution.

Monitoring changes in terrestrial vegetation is essential to understanding interactions between the atmosphere and the biosphere in terrestrial ecosystems. To this end, satellite remote sensing offers maps for examining the land surface at different scales. However, detailed information has been hindered by clouds or limited by the spatial resolution of satellite imagery. Moreover, the impacts of spatial and temporal resolution on photosynthesis monitoring have not been fully revealed. In this dissertation, I aimed to enhance the spatial and temporal resolution of satellite imagery towards daily, gap-free vegetation maps with high spatial resolution. In order to expand vegetation change monitoring in time and space using high-resolution satellite images, I 1) improved the temporal resolution of satellite datasets through image fusion using geostationary satellites, 2) improved the spatial resolution of satellite datasets using generative adversarial networks, and 3) demonstrated the use of high spatiotemporal resolution maps for monitoring plant photosynthesis, especially over heterogeneous landscapes. With the advent of new techniques in satellite remote sensing, current and past datasets can be fully utilized for monitoring vegetation changes with respect to spatial and temporal resolution. In Chapter 2, I developed an integrated system that implements geostationary satellite products in a spatiotemporal image fusion method for monitoring canopy photosynthesis. The integrated system comprises a series of processes (i.e., cloud masking, nadir bidirectional reflectance distribution function adjustment, spatial registration, spatiotemporal image fusion, spatial gap-filling, and temporal gap-filling).
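As a minimal sketch of how such a daily gap-free pipeline can be orchestrated, the toy code below chains cloud masking and temporal gap-filling over a stack of daily reflectance. It is a simplified stand-in with assumed thresholds and linear interpolation, and it omits the BRDF adjustment, spatial registration, and the actual spatiotemporal fusion step described above.

```python
# Illustrative skeleton of a daily gap-filled reflectance pipeline (not the dissertation's code).
import numpy as np

def mask_clouds(refl, cloud_prob, threshold=0.4):
    """Set cloudy pixels to NaN based on a per-pixel cloud probability layer."""
    out = refl.copy()
    out[cloud_prob > threshold] = np.nan
    return out

def temporal_gap_fill(series):
    """Linearly interpolate NaNs along the time axis of a (days, rows, cols) stack."""
    t = np.arange(series.shape[0])
    filled = series.copy()
    for idx in np.ndindex(series.shape[1:]):
        y = series[(slice(None),) + idx]
        ok = ~np.isnan(y)
        if ok.any():
            filled[(slice(None),) + idx] = np.interp(t, t[ok], y[ok])
    return filled

# Toy daily red-band series with synthetic cloud contamination.
days = 10
refl = 0.05 + 0.1 * np.random.rand(days, 5, 5)
clouds = np.random.rand(days, 5, 5)
masked = mask_clouds(refl, clouds)
gap_free = temporal_gap_fill(masked)
print(np.isnan(gap_free).sum())  # 0 if every pixel has at least one clear day
```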
I conducted the evaluation of the integrated system over a heterogeneous rice paddy landscape, where drastic land cover changes were caused by cultivation management, and a deciduous forest, where consecutive changes occurred in time. The results showed that the integrated system predicted in situ measurements well, without data gaps (R2 = 0.71, relative bias = 5.64% at the rice paddy site; R2 = 0.79, relative bias = -13.8% at the deciduous forest site). The integrated system gradually improved the spatiotemporal resolution of vegetation maps, reducing the underestimation of in situ measurements, especially during the peak growing season. Since the integrated system generates daily canopy photosynthesis maps with high spatial resolution for monitoring dynamics among regions of interest worldwide, I anticipate future efforts to reveal the information hidden by the limited spatial and temporal resolution of satellite imagery. Detailed spatial representations of terrestrial vegetation are essential for precision agricultural applications and the monitoring of land cover changes in heterogeneous landscapes. The advent of satellite-based remote sensing has facilitated daily observations of the Earth's surface with high spatial resolution. In particular, a data fusion product such as Planet Fusion has realized the delivery of daily, gap-free surface reflectance data with 3-m pixel resolution through full utilization of relatively recent (i.e., 2018 onward) CubeSat constellation data. However, the spatial resolution of past satellite sensors (i.e., 30–60 m for Landsat) has restricted the detailed spatial analysis of past changes in vegetation. In Chapter 3, to overcome the spatial resolution constraint of Landsat data for long-term vegetation monitoring, we propose a dual remote-sensing super-resolution generative adversarial network (dual RSS-GAN) combining Planet Fusion and Landsat 8 data to simulate spatially enhanced long-term time series of the normalized difference vegetation index (NDVI) and near-infrared reflectance from vegetation (NIRv). We evaluated the performance of the dual RSS-GAN against in situ tower-based continuous measurements (up to 8 years) and remotely piloted aerial system-based maps of cropland and deciduous forest in the Republic of Korea. The dual RSS-GAN enhanced spatial representations in Landsat 8 images and captured seasonal variation in vegetation indices (R2 > 0.95, for the dual RSS-GAN maps vs. in situ data from all sites). Overall, the dual RSS-GAN reduced Landsat 8 vegetation index underestimations compared with in situ measurements; relative bias values of NDVI ranged from βˆ’3.2% to 1.2% and βˆ’12.4% to βˆ’3.7% for the dual RSS-GAN and Landsat 8, respectively. This improvement was caused by spatial enhancement through the dual RSS-GAN, which captured fine-scale information from Planet Fusion. This study presents a new approach for the restoration of hidden sub-pixel spatial information in Landsat images. Mapping canopy photosynthesis at both high spatial and high temporal resolution is essential for carbon cycle monitoring in heterogeneous areas. However, well-established satellites in sun-synchronous orbits, such as Sentinel-2, Landsat, and MODIS, can provide either high spatial or high temporal resolution, but not both. Recently established CubeSat satellite constellations have created an opportunity to overcome this resolution trade-off. In particular, Planet Fusion allows full utilization of the CubeSat data resolution and coverage while maintaining high radiometric quality.
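Both the fusion product of Chapter 2 and the super-resolved product of Chapter 3 are evaluated on NDVI and NIRv, which follow their standard definitions: NDVI = (NIR βˆ’ Red) / (NIR + Red) and NIRv = NDVI Γ— NIR reflectance (Badgley et al., 2017). The short sketch below computes both from toy reflectance arrays; the array values are illustrative only.

```python
# NDVI and NIRv from red and near-infrared surface reflectance (standard definitions).
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + eps)

def nirv(red, nir, eps=1e-6):
    """Near-infrared reflectance of vegetation: NDVI multiplied by NIR reflectance."""
    return ndvi(red, nir, eps) * nir

red = np.array([[0.05, 0.08], [0.04, 0.30]])  # toy 2x2 reflectance values
nir = np.array([[0.45, 0.40], [0.50, 0.35]])
print(ndvi(red, nir))
print(nirv(red, nir))
```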
In Chapter 4, I used the Planet Fusion surface reflectance product to calculate daily, 3-m resolution, gap-free maps of the near-infrared radiation reflected from vegetation (NIRvP). I then evaluated the performance of these NIRvP maps for estimating canopy photosynthesis by comparison with data from a flux tower network in the Sacramento-San Joaquin Delta, California, USA. Overall, NIRvP maps captured temporal variations in canopy photosynthesis at individual sites, despite changes in water extent in the wetlands and frequent mowing in the crop fields. When combining data from all sites, however, I found that robust agreement between NIRvP maps and canopy photosynthesis could only be achieved when matching NIRvP maps to the flux tower footprints. In this case of matched footprints, NIRvP maps showed considerably better performance than in situ NIRvP in estimating canopy photosynthesis, both for daily sums and for data around the time of satellite overpass (R2 = 0.78 vs. 0.60, for maps vs. in situ for the satellite overpass time case). This difference in performance was mostly due to the higher degree of consistency in slopes of NIRvP-canopy photosynthesis relationships across the study sites for flux tower footprint-matched maps. Our results show the importance of matching satellite observations to the flux tower footprint and demonstrate the potential of CubeSat constellation imagery to monitor canopy photosynthesis remotely at high spatiotemporal resolution.

Table of contents:
Chapter 1. Introduction
  1. Background
    1.1 Daily gap-free surface reflectance using geostationary satellite products
    1.2 Monitoring past vegetation changes with high spatial resolution
    1.3 High spatiotemporal resolution vegetation photosynthesis maps
  2. Purpose of Research
Chapter 2. Generating daily gap-filled BRDF adjusted surface reflectance product at 10 m resolution using geostationary satellite product for monitoring daily canopy photosynthesis
  1. Introduction
  2. Methods
    2.1 Study sites
    2.2 In situ measurements
    2.3 Satellite products
    2.4 Integrated system
    2.5 Canopy photosynthesis
    2.6 Evaluation
  3. Results and discussion
    3.1 Comparison of STIF NDVI and NIRv with in situ NDVI and NIRv
    3.2 Comparison of STIF NIRvP with in situ NIRvP
  4. Conclusion
Chapter 3. Super-resolution of historic Landsat imagery using a dual Generative Adversarial Network (GAN) model with CubeSat constellation imagery for monitoring vegetation changes
  1. Introduction
  2. Methods
    2.1 Real-ESRGAN model
    2.2 Study sites
    2.3 In situ measurements
    2.4 Vegetation index
    2.5 Satellite data
    2.6 Planet Fusion
    2.7 Dual RSS-GAN via fine-tuned Real-ESRGAN
    2.8 Evaluation
  3. Results
    3.1 Comparison of NDVI and NIRv maps from Planet Fusion, Sentinel 2 NBAR, and Landsat 8 NBAR data with in situ NDVI and NIRv
    3.2 Comparison of dual RSS-SRGAN model results with Landsat 8 NDVI and NIRv
    3.3 Comparison of dual RSS-GAN model results with respect to in situ time-series NDVI and NIRv
    3.4 Comparison of the dual RSS-GAN model with NDVI and NIRv maps derived from RPAS
  4. Discussion
    4.1 Monitoring changes in terrestrial vegetation using the dual RSS-GAN model
    4.2 CubeSat data in the dual RSS-GAN model
    4.3 Perspectives and limitations
  5. Conclusion
  Appendices
  Supplementary material
Chapter 4. Matching high resolution satellite data and flux tower footprints improves their agreement in photosynthesis estimates
  1. Introduction
  2. Methods
    2.1 Study sites
    2.2 In situ measurements
    2.3 Planet Fusion NIRvP
    2.4 Flux footprint model
    2.5 Evaluation
  3. Results
    3.1 Comparison of Planet Fusion NIRv and NIRvP with in situ NIRv and NIRvP
    3.2 Comparison of instantaneous Planet Fusion NIRv and NIRvP against tower GPP estimates
    3.3 Daily GPP estimation from Planet Fusion-derived NIRvP
  4. Discussion
    4.1 Flux tower footprint matching and effects of spatial and temporal resolution on GPP estimation
    4.2 Roles of radiation component in GPP mapping
    4.3 Limitations and perspectives
  5. Conclusion
  Appendix
  Supplementary Materials
Chapter 5. Conclusion
Bibliography
Abstract in Korean
Acknowledgements
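As a hedged sketch of the Chapter 4 quantities described above, the code below forms NIRvP as NIRv multiplied by incoming photosynthetically active radiation (PAR) and then averages a NIRvP map with flux-footprint weights, mirroring the footprint-matching step. The Gaussian footprint, tile size, and PAR value are synthetic assumptions, not the dissertation's footprint model.

```python
# NIRvP (NIRv x incoming PAR) and a footprint-weighted average (illustrative only).
import numpy as np

def nirvp(nirv_map, par):
    """NIRvP = NIRv * incoming PAR (units follow the PAR input)."""
    return nirv_map * par

def footprint_average(value_map, footprint_weights):
    """Average a satellite map using normalized flux-footprint weights."""
    w = footprint_weights / footprint_weights.sum()
    return np.nansum(value_map * w)

nirv_map = np.random.rand(20, 20) * 0.3                                # toy NIRv tile
par = 1500.0                                                           # toy incoming PAR at overpass
weights = np.exp(-((np.indices((20, 20)) - 10) ** 2).sum(0) / 30.0)    # toy Gaussian footprint
tower_nirvp = footprint_average(nirvp(nirv_map, par), weights)
print(round(float(tower_nirvp), 2))
```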

    Manipulation and generation of synthetic satellite images using deep learning models

    Generation and manipulation of digital images based on deep learning (DL) are receiving increasing attention for both benign and malevolent uses. As the importance of satellite imagery grows, DL has also started being used for the generation of synthetic satellite images. However, the direct use of techniques developed for computer vision applications is not possible, due to the different nature of satellite images. The goal of our work is to describe a number of methods to generate manipulated and synthetic satellite images. To be specific, we focus on two different types of manipulations: full image modification and local splicing. In the former case, we rely on generative adversarial networks commonly used for style transfer applications, adapting them to implement two different kinds of transfer: (i) land cover transfer, aiming at modifying the image content from vegetation to barren and vice versa, and (ii) season transfer, aiming at modifying the image content from winter to summer and vice versa. With regard to local splicing, we present two different architectures. The first one uses an image generative pretrained transformer and is trained on pixel sequences in order to predict pixels in semantically consistent regions identified using watershed segmentation. The second technique uses a vision transformer operating on image patches rather than on a pixel-by-pixel basis. We use the trained vision transformer to generate synthetic image segments and splice them into a selected region of the to-be-manipulated image. All the proposed methods generate highly realistic synthetic satellite images. Among the possible applications of the proposed techniques, we mention the generation of proper datasets for the evaluation and training of tools for the analysis of satellite images. Β© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
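    To make the local-splicing idea concrete, here is a hedged sketch rather than the paper's architecture: watershed segmentation delineates semantically consistent regions, and the pixels of one selected region are replaced with synthetic content. A random texture stands in for the transformer-generated segment.

```python
# Hedged sketch of region-based splicing: segment with watershed, then replace one region.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import watershed

def splice_region(image, synthetic, label_map, target_label):
    """Replace the pixels of one watershed region with synthetic content."""
    out = image.copy()
    mask = label_map == target_label
    out[mask] = synthetic[mask]
    return out

image = np.random.rand(64, 64, 3).astype(np.float32)       # stand-in satellite tile
labels = watershed(sobel(rgb2gray(image)))                  # regions from local minima markers
synthetic = np.random.rand(64, 64, 3).astype(np.float32)    # stand-in for generated content
spliced = splice_region(image, synthetic, labels, target_label=labels[32, 32])
print(spliced.shape, int(labels.max()))
```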

    Radiometrically-Accurate Hyperspectral Data Sharpening

    Improving the spatial resolution of hyperspectral images (HSI) has traditionally been an important topic in the field of remote sensing. Many approaches have been proposed based on various theories, including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share some common disadvantages: they are not robust to different up-scale ratios, and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, many learning-based methods have been proposed through decades of innovation, but most of them require a large set of training pairs, which is impractical for many real problems. To solve these problems, we first proposed an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically accurate high-resolution HSI. First, with the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), a preliminary high-resolution hyperspectral image (HR-HSI) is calculated via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated as the difference between it and a CNN-generated blurry version. By injecting these details into the output of a generative CNN that takes the LR-HSI as input, the final HR-HSI is obtained. LPFNet is designed for fusing the LR-HSI and an HR-MSI covering the same visible-near-infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are equally important to VNIR bands, but their spatial details are more challenging to enhance because the HR-MSI, used to provide the spatial details in the fusion process, usually has no SWIR coverage or only lower-spatial-resolution SWIR. To this end, we designed an unsupervised cascade fusion network (UCFNet) to sharpen Vis-NIR-SWIR LR-HSI. First, the preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral sharpening algorithm. Then, the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce an HR-HSI. In the training process, a cascade sharpening method is employed to improve stability. Furthermore, a self-supervising loss based on the cascade strategy is introduced to further improve spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios, and state-of-the-art baseline methods are implemented and compared with the proposed methods using different quantitative metrics. Results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
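    To make the detail-injection idea concrete, the sketch below substitutes simple stand-ins for the learned components of LPFNet: a per-band linear regression from the HR-MSI produces a preliminary HR-HSI, a Gaussian blur replaces the CNN-generated blurry version, and the resulting high-frequency details are injected into the upsampled LR-HSI. All shapes and the scale factor are illustrative assumptions, not the authors' network.

```python
# Hedged sketch of detail-injection sharpening (simplified stand-in for LPFNet's idea).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sharpen_hsi(lr_hsi, hr_msi, scale):
    """lr_hsi: (B_h, h, w); hr_msi: (B_m, H, W) with H = h*scale, W = w*scale."""
    # 1) Fit each HSI band as a linear combination of MSI bands at low resolution.
    lr_msi = np.stack([zoom(b, 1.0 / scale, order=1) for b in hr_msi])
    X = np.column_stack([lr_msi.reshape(lr_msi.shape[0], -1).T,
                         np.ones(lr_hsi.shape[1] * lr_hsi.shape[2])])
    coeffs, *_ = np.linalg.lstsq(X, lr_hsi.reshape(lr_hsi.shape[0], -1).T, rcond=None)
    # 2) Apply the regression at high resolution to get the preliminary HR-HSI.
    Xh = np.column_stack([hr_msi.reshape(hr_msi.shape[0], -1).T,
                          np.ones(hr_msi.shape[1] * hr_msi.shape[2])])
    prelim = (Xh @ coeffs).T.reshape(lr_hsi.shape[0], *hr_msi.shape[1:])
    # 3) High-frequency details = preliminary minus a blurred version (blur stands in for the CNN).
    details = prelim - gaussian_filter(prelim, sigma=(0, scale, scale))
    # 4) Inject the details into the upsampled LR-HSI.
    upsampled = np.stack([zoom(b, scale, order=1) for b in lr_hsi])
    return upsampled + details

lr_hsi = np.random.rand(30, 16, 16)   # toy 30-band low-resolution cube
hr_msi = np.random.rand(4, 64, 64)    # toy 4-band high-resolution image
print(sharpen_hsi(lr_hsi, hr_msi, scale=4).shape)  # (30, 64, 64)
```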