SUPER-RESOLUTION IMAGING OF REMOTE SENSED BRIGHTNESS TEMPERATURE USING A CONVOLUTIONAL NEURAL NETWORK
Steady improvements to the instruments used in remote sensing have led to much higher resolution data, often contemporaneous with lower resolution instruments that continue to collect data. There is a clear opportunity to reconcile recent high resolution satellite data with the lower resolution data of the past. Super-resolution (SR) imaging is a technique that increases the spatial resolution of image data by training statistical methods on simultaneously acquired lower and higher resolution data sets. The Special Sensor Microwave/Imager (SSMI) and Advanced Microwave Scanning Radiometer (AMSR2) brightness temperature data products are well suited to super-resolution imaging, and SR can be used to standardize the higher resolution across the entire record of observations. Among the methods used in super-resolution imaging, neural networks have led to major improvements in computer vision and have seen great success in the super-resolution of photographic images. We trained two neural networks, based on the ResNet architecture, to super-resolve the 25 km resolution SSMI and AMSR2 brightness temperature data products to a 10 km resolution. The mean error over all frequencies and polarizations for the AMSR2 and SSMI models' predictions is 0.84% and 2.4%, respectively, for the years 2013 and 2019.
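The abstract above describes a residual (ResNet-style) CNN that maps 25 km brightness-temperature grids onto a 10 km grid. The sketch below illustrates that general architecture in PyTorch; it is not the authors' model, and the channel counts, block depth, and 2.5x bilinear pre-upsampling are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact model): a small ResNet-style
# super-resolution CNN for single-channel brightness temperature (Tb) grids.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class BrightnessTempSR(nn.Module):
    """Upsample a single-channel Tb grid by 2.5x and refine with residual blocks."""
    def __init__(self, n_blocks: int = 8, channels: int = 64, scale: float = 2.5):
        super().__init__()
        self.scale = scale
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, tb_lowres):
        # Interpolate to the 10 km target grid, then learn a residual correction.
        x = F.interpolate(tb_lowres, scale_factor=self.scale, mode="bilinear",
                          align_corners=False)
        return x + self.tail(self.body(self.head(x)))

model = BrightnessTempSR()
fake_batch = torch.randn(4, 1, 64, 64)   # e.g. 64x64 tiles of 25 km Tb data
print(model(fake_batch).shape)           # -> torch.Size([4, 1, 160, 160])
```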
Tarsier: Evolving Noise Injection in Super-Resolution GANs
Super-resolution aims at increasing the resolution and level of detail within an image. The current state of the art in general single-image super-resolution is held by NESRGAN+, which injects Gaussian noise after each residual layer at training time. In this paper, we harness evolutionary methods to improve NESRGAN+ by optimizing the noise injection at inference time. More precisely, we use Diagonal CMA to optimize the injected noise according to a novel criterion combining quality assessment and realism. Our results are validated by the PIRM perceptual score and a human study. Our method outperforms NESRGAN+ on several standard super-resolution datasets. More generally, our approach can be used to optimize any method based on noise injection.
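As a rough illustration of the inference-time idea described above, the sketch below uses diagonal CMA-ES (via the pycma library's CMA_diagonal option) to search for an injected-noise vector that maximizes a combined quality/realism score; the generator and scoring function are placeholders, not the NESRGAN+ or Tarsier code.

```python
# Toy sketch of inference-time noise optimization with diagonal CMA-ES.
# `generator` and `score` are stand-ins; only the CMA loop is meaningful here.
import numpy as np
import cma

NOISE_DIM = 128  # assumed size of the flattened injected-noise vector

def generator(lowres_img: np.ndarray, noise: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained SR network with noise injection."""
    return lowres_img  # stand-in

def score(sr_img: np.ndarray) -> float:
    """Placeholder combined criterion (e.g. no-reference quality + realism)."""
    return float(np.mean(sr_img))  # stand-in

def optimize_noise(lowres_img, budget: int = 200):
    es = cma.CMAEvolutionStrategy(np.zeros(NOISE_DIM), 0.5,
                                  {"CMA_diagonal": True,
                                   "maxfevals": budget,
                                   "verbose": -9})
    while not es.stop():
        candidates = es.ask()
        # CMA minimizes, so negate the score we want to maximize.
        es.tell(candidates, [-score(generator(lowres_img, np.asarray(z)))
                             for z in candidates])
    return es.result.xbest

best_noise = optimize_noise(np.random.rand(32, 32))
```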
Manipulation and generation of synthetic satellite images using deep learning models
Generation and manipulation of digital images based on deep learning (DL) are receiving increasing attention for both benign and malevolent uses. As the importance of satellite imagery grows, DL has also started being used for the generation of synthetic satellite images. However, the direct use of techniques developed for computer vision applications is not possible, due to the different nature of satellite images. The goal of our work is to describe a number of methods to generate manipulated and synthetic satellite images. To be specific, we focus on two different types of manipulations: full image modification and local splicing. In the former case, we rely on generative adversarial networks commonly used for style transfer applications, adapting them to implement two different kinds of transfer: (i) land cover transfer, aiming at modifying the image content from vegetation to barren and vice versa, and (ii) season transfer, aiming at modifying the image content from winter to summer and vice versa. With regard to local splicing, we present two different architectures. The first one uses an image generative pretrained transformer and is trained on pixel sequences in order to predict pixels in semantically consistent regions identified using watershed segmentation. The second technique uses a vision transformer operating on image patches rather than on a pixel-by-pixel basis. We use the trained vision transformer to generate synthetic image segments and splice them into a selected region of the to-be-manipulated image. All the proposed methods generate highly realistic synthetic satellite images. Among the possible applications of the proposed techniques, we mention the generation of proper datasets for the evaluation and training of tools for the analysis of satellite images. (c) The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
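For the local-splicing pipeline sketched in the abstract, the snippet below shows one hedged interpretation of the region-selection step: watershed segmentation picks a semantically consistent region, and generated pixels are spliced only inside it. The file names, marker count, and the stand-in "synthetic" array are assumptions for illustration, not the paper's code.

```python
# Illustrative sketch: watershed-based region selection followed by local splicing.
import numpy as np
from skimage import io, color, filters, segmentation

image = io.imread("scene.png")                      # hypothetical input tile (RGB)
gray = color.rgb2gray(image)

# Watershed on the gradient image; `markers=200` is an arbitrary choice here.
labels = segmentation.watershed(filters.sobel(gray), markers=200, compactness=0.001)

# Choose one segment (here the largest) as the region to manipulate.
region_id = np.bincount(labels.ravel()).argmax()
mask = labels == region_id

# Placeholder for a generative model's output (same shape as the input image).
synthetic = np.random.randint(0, 256, image.shape, dtype=image.dtype)

# Local splicing: replace pixels only inside the selected region.
spliced = image.copy()
spliced[mask] = synthetic[mask]
io.imsave("spliced.png", spliced)
```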
Sentinel2GlobalLULC: A Sentinel-2 RGB image tile dataset for global land use/cover mapping with deep learning
Land-Use and Land-Cover (LULC) mapping is relevant for many applications, from Earth system and climate modelling to territorial and urban planning. Global LULC products are continuously developing as remote sensing data and methods grow. However, there still exists low consistency among LULC products due to low accuracy in some regions and LULC types. Here, we introduce Sentinel2GlobalLULC, a Sentinel-2 RGB image dataset, built from the spatial-temporal consensus of up to 15 global LULC maps available in Google Earth Engine. Sentinel2GlobalLULC v2.1 contains 194877 single-class RGB image tiles organized into 29 LULC classes. Each image is a 224 × 224 pixel tile at 10 × 10 m resolution built as a cloud-free composite from Sentinel-2 images acquired between June 2015 and October 2020. Metadata includes a unique LULC annotation per image, together with level of consensus, reverse geo-referencing, global human modification index, and number of dates used in the composite. Sentinel2GlobalLULC is designed for training deep learning models aiming to build precise and robust global or regional LULC maps. This work is part of the project "Thematic Center on Mountain Ecosystem & Remote sensing, Deep learning-AI e-Services University of Granada-Sierra Nevada" (LifeWatch-2019-10-UGR-01), which has been co-funded by the Ministry of Science and Innovation through the FEDER funds from the Spanish Pluriregional Operational Program 2014-2020 (POPE), LifeWatch-ERIC action line, within the Workpackages LifeWatch-2019-10-UGR-01 WP-8, LifeWatch-2019-10-UGR-01 WP-7 and LifeWatch-2019-10-UGR-01 WP-4. This work was also supported by projects A-RNM-256-UGR18, A-TIC-458-UGR18, PID2020-119478GB-I00 and P18-FR-4961. E.G. was supported by the European Research Council grant agreement no. 647038 (BIODESERT) and the Generalitat Valenciana, and the European Social Fund (APOSTD/2021/188). We thank the "Programa de Unidades de Excelencia del Plan Propio" of the University of Granada for partially covering the article processing charge.
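Since the dataset is intended for training deep learning classifiers on 224 x 224 RGB tiles in 29 classes, the following minimal sketch shows one way such training could look in PyTorch. The one-folder-per-class directory layout and the ResNet-18 backbone are assumptions for illustration, not part of the dataset description.

```python
# Minimal sketch for training a classifier on class-labelled 224x224 RGB tiles.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Assumed layout: one subdirectory per LULC class under this root.
dataset = datasets.ImageFolder("Sentinel2GlobalLULC/tiles", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

model = models.resnet18(weights=None, num_classes=29)   # 29 LULC classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```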
Monitoring vegetation change through enhanced spatiotemporal resolution
Thesis (Ph.D.) -- Seoul National University Graduate School: Graduate School of Environmental Studies, Interdisciplinary Program in Landscape Architecture, February 2023. Advisor: Youngryel Ryu.
Monitoring vegetation change is necessary to understand the interactions between the atmosphere and the biosphere in terrestrial ecosystems. Satellite imagery can provide vegetation maps by observing the land surface, but detailed information on surface change has been limited by clouds and by the spatial resolution of the imagery. Moreover, the influence of the spatiotemporal resolution of satellite imagery on photosynthesis monitoring through vegetation maps has not been fully revealed.
In this dissertation, I aimed to enhance the spatiotemporal resolution of satellite imagery in order to generate high-resolution vegetation maps at daily intervals. To extend vegetation change monitoring in time and space with high-resolution satellite imagery, I 1) improved temporal resolution through image fusion using geostationary satellites, 2) improved spatial resolution using generative adversarial networks, and 3) monitored plant photosynthesis over areas with heterogeneous land cover using satellite imagery of high spatiotemporal resolution. As new techniques emerge in satellite-based remote sensing, current and past satellite imagery can thus be enhanced in spatial and temporal resolution for monitoring vegetation change.
Chapter 2 shows that temporal resolution is improved when plant photosynthesis is monitored through spatiotemporal image fusion using geostationary satellite imagery. The spatiotemporal image fusion involves processes such as cloud detection, bidirectional reflectance function adjustment, spatial registration, spatiotemporal fusion, and spatiotemporal gap-filling. The fusion product was evaluated at two sites (a rice paddy and a deciduous forest) where vegetation indices vary strongly within the year owing to, for example, cultivation management. The fusion product predicted in situ observations without data gaps (R2 = 0.71, relative bias = 5.64% at the rice paddy; R2 = 0.79, relative bias = -13.8% at the deciduous forest). Spatiotemporal image fusion gradually improved the spatiotemporal resolution of the vegetation maps, reducing the underestimation of in situ observations by satellite imagery during the growing season. Because image fusion generates daily photosynthesis maps at high spatiotemporal resolution, I expect it to help uncover vegetation change processes that have remained hidden by the limited spatiotemporal resolution of satellite imagery.
The spatial distribution of vegetation is essential for precision agriculture and for monitoring land cover change. High-resolution satellite imagery has made it easier to observe the Earth's surface. In particular, Planet Fusion is a gap-free surface reflectance product at 3 m spatial resolution that makes full use of CubeSat constellation data. However, the spatial resolution of past satellite sensors (30-60 m for Landsat) has limited the detailed analysis of spatial changes in vegetation. In Chapter 3, to enhance the spatial resolution of Landsat data, a dual generative adversarial network (the dual RSS-GAN) was trained with Planet Fusion and Landsat 8 data to generate high-resolution maps of the normalized difference vegetation index (NDVI) and near-infrared reflectance from vegetation (NIRv). The performance of the dual RSS-GAN was evaluated against tower-based in situ vegetation indices (up to 8 years) and drone-based maps at two sites in the Republic of Korea (a rice paddy and a deciduous forest). The dual RSS-GAN enhanced the spatial resolution of Landsat 8 imagery, complementing its spatial representation and capturing seasonal variation in vegetation indices (R2 > 0.96). It also mitigated the underestimation of Landsat 8 vegetation indices relative to in situ observations: the relative bias of the dual RSS-GAN and Landsat 8 ranged from -0.8% to -1.5% and from -10.3% to -4.6%, respectively. This improvement was possible because the dual RSS-GAN learned spatial information from Planet Fusion. These results represent a new approach that enhances the spatial resolution of Landsat imagery to reveal hidden spatial information.
Maps of plant photosynthesis at high resolution are essential for carbon cycle monitoring in areas with complex land cover. However, satellites in sun-synchronous orbits, such as Sentinel-2, Landsat and MODIS, can provide imagery with either high spatial or high temporal resolution, but not both. Recently launched CubeSat constellations can overcome this resolution limitation. In particular, Planet Fusion makes it possible to observe the land surface at the spatiotemporal resolution of CubeSat data. In Chapter 4, Planet Fusion surface reflectance was used to generate daily maps of near-infrared radiation reflected from vegetation (NIRvP) at 3 m resolution. The performance of these NIRvP maps for estimating plant photosynthesis was then evaluated by comparison with data from a flux tower network in the Sacramento-San Joaquin Delta, California, USA. Overall, the NIRvP maps captured temporal variation in plant photosynthesis at individual sites despite frequent changes in water extent in the wetlands. Across all sites, however, the relationship between the NIRvP maps and plant photosynthesis was strong only when the maps were matched to the flux tower footprints. With footprints matched, the NIRvP maps outperformed in situ NIRvP in estimating plant photosynthesis. This difference in performance arose because, with matched flux tower footprints, the slopes of the NIRvP-photosynthesis relationship were consistent across the study sites. These results demonstrate the importance of matching satellite observations to flux tower footprints and show the potential of CubeSat constellation data for remotely monitoring plant photosynthesis at high spatiotemporal resolution.
Monitoring changes in terrestrial vegetation is essential to understanding the interactions between the atmosphere and the biosphere, especially in terrestrial ecosystems. To this end, satellite remote sensing offers maps for examining the land surface at different scales. However, detailed information has been hidden by clouds or limited by the spatial resolution of satellite imagery. Moreover, the impacts of spatial and temporal resolution on photosynthesis monitoring have not been fully revealed.
In this dissertation, I aimed to enhance the spatial and temporal resolution of satellite imagery towards daily gap-free vegetation maps with high spatial resolution. In order to expand vegetation change monitoring in time and space using high-resolution satellite images, I 1) improved the temporal resolution of satellite datasets through image fusion using geostationary satellites, 2) improved the spatial resolution of satellite datasets using generative adversarial networks, and 3) demonstrated the use of high spatiotemporal resolution maps for monitoring plant photosynthesis, especially over heterogeneous landscapes. With the advent of new techniques in satellite remote sensing, current and past datasets can be fully utilized in terms of spatial and temporal resolution for monitoring vegetation changes.
In Chapter 2, I developed an integrated system that implements geostationary satellite products in a spatiotemporal image fusion method for monitoring canopy photosynthesis. The integrated system comprises a series of processes (i.e., cloud masking, nadir bidirectional reflectance function adjustment, spatial registration, spatiotemporal image fusion, spatial gap-filling, and temporal gap-filling). I evaluated the integrated system over a heterogeneous rice paddy landscape, where drastic land cover changes are caused by cultivation management, and over a deciduous forest, where changes occur continuously in time. The results showed that the integrated system predicts in situ measurements well and without data gaps (R2 = 0.71, relative bias = 5.64% at the rice paddy site; R2 = 0.79, relative bias = -13.8% at the deciduous forest site). The integrated system gradually improved the spatiotemporal resolution of vegetation maps, reducing the underestimation of in situ measurements, especially during the peak growing season. Because the integrated system generates daily canopy photosynthesis maps with high spatial resolution for monitoring dynamics among regions of interest worldwide, I anticipate future efforts to reveal information hidden by the limited spatial and temporal resolution of satellite imagery.
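As a rough, heavily simplified illustration of the spatiotemporal-fusion idea (a fine-resolution image at a clear reference date plus the temporal change seen by the coarse geostationary sensor), consider the sketch below. It is not the integrated system itself; the additive STARFM-like prediction, grid sizes, and perfect-alignment assumption are illustrative only.

```python
# Simplified spatiotemporal fusion: predict a fine-resolution image at t1 from a
# fine image at t0 plus the coarse-sensor change between t0 and t1, resampled.
import numpy as np
from scipy.ndimage import zoom

def fuse(fine_t0: np.ndarray, coarse_t0: np.ndarray, coarse_t1: np.ndarray) -> np.ndarray:
    scale = fine_t0.shape[0] / coarse_t0.shape[0]          # assumes aligned, square grids
    delta = zoom(coarse_t1 - coarse_t0, scale, order=1)    # coarse temporal change, upsampled
    return fine_t0 + delta                                 # additive prediction

fine_t0 = np.random.rand(500, 500)     # e.g. fine-resolution reflectance at a clear date
coarse_t0 = np.random.rand(50, 50)     # e.g. geostationary grid, same date
coarse_t1 = np.random.rand(50, 50)     # geostationary observation on the prediction date
fine_t1 = fuse(fine_t0, coarse_t0, coarse_t1)
```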
Detailed spatial representations of terrestrial vegetation are essential for precision agricultural applications and the monitoring of land cover changes in heterogeneous landscapes. The advent of satellite-based remote sensing has facilitated daily observations of the Earth's surface with high spatial resolution. In particular, a data fusion product such as Planet Fusion has realized the delivery of daily, gap-free surface reflectance data with 3-m pixel resolution through full utilization of relatively recent (i.e., 2018 onward) CubeSat constellation data. However, the spatial resolution of past satellite sensors (i.e., 30-60 m for Landsat) has restricted the detailed spatial analysis of past changes in vegetation. In Chapter 3, to overcome the spatial resolution constraint of Landsat data for long-term vegetation monitoring, we propose a dual remote-sensing super-resolution generative adversarial network (dual RSS-GAN) combining Planet Fusion and Landsat 8 data to simulate spatially enhanced long-term time series of the normalized difference vegetation index (NDVI) and near-infrared reflectance from vegetation (NIRv). We evaluated the performance of the dual RSS-GAN against in situ tower-based continuous measurements (up to 8 years) and remotely piloted aerial system-based maps of cropland and deciduous forest in the Republic of Korea. The dual RSS-GAN enhanced spatial representations in Landsat 8 images and captured seasonal variation in vegetation indices (R2 > 0.95, for the dual RSS-GAN maps vs. in situ data from all sites). Overall, the dual RSS-GAN reduced Landsat 8 vegetation index underestimation compared with in situ measurements; relative bias values of NDVI ranged from -3.2% to 1.2% and -12.4% to -3.7% for the dual RSS-GAN and Landsat 8, respectively. This improvement was achieved by spatial enhancement through the dual RSS-GAN, which captured fine-scale information from Planet Fusion. This study presents a new approach for the restoration of hidden sub-pixel spatial information in Landsat images.
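The following is a generic, textbook-style adversarial training step for pairing coarse Landsat-like inputs with fine Planet-Fusion-like targets, included only to make the GAN training idea concrete. G, D, the loss weighting, and the L1 content loss are assumptions; this does not reproduce the dual RSS-GAN or the fine-tuned Real-ESRGAN.

```python
# Generic super-resolution GAN training step (PyTorch); G and D are any suitable
# generator/discriminator modules, opt_g/opt_d their optimizers.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, landsat_lr, planet_hr):
    # Discriminator update: real fine-resolution tiles vs. tiles generated from coarse input.
    with torch.no_grad():
        fake = G(landsat_lr)
    real_logits, fake_logits = D(planet_hr), D(fake)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: pixel-wise content loss plus a small adversarial term.
    fake = G(landsat_lr)
    adv_logits = D(fake)
    g_loss = (F.l1_loss(fake, planet_hr)
              + 1e-3 * F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits)))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```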
Mapping canopy photosynthesis at both high spatial and temporal resolution is essential for carbon cycle monitoring in heterogeneous areas. However, well-established satellites in sun-synchronous orbits, such as Sentinel-2, Landsat and MODIS, can only provide either high spatial or high temporal resolution, but not both. Recently established CubeSat satellite constellations have created an opportunity to overcome this resolution trade-off. In particular, Planet Fusion allows full utilization of the CubeSat data resolution and coverage while maintaining high radiometric quality. In Chapter 4, I used the Planet Fusion surface reflectance product to calculate daily, 3-m resolution, gap-free maps of the near-infrared radiation reflected from vegetation (NIRvP). I then evaluated the performance of these NIRvP maps for estimating canopy photosynthesis by comparing them with data from a flux tower network in the Sacramento-San Joaquin Delta, California, USA. Overall, NIRvP maps captured temporal variations in canopy photosynthesis of individual sites, despite changes in water extent in the wetlands and frequent mowing in the crop fields. When combining data from all sites, however, I found that robust agreement between NIRvP maps and canopy photosynthesis could only be achieved when matching NIRvP maps to the flux tower footprints. In this case of matched footprints, NIRvP maps showed considerably better performance than in situ NIRvP in estimating canopy photosynthesis, both for daily sums and for data around the time of satellite overpass (R2 = 0.78 vs. 0.60, for maps vs. in situ for the satellite overpass time case). This difference in performance was mostly due to the higher degree of consistency in slopes of NIRvP-canopy photosynthesis relationships across the study sites for flux tower footprint-matched maps. Our results show the importance of matching satellite observations to the flux tower footprint and demonstrate the potential of CubeSat constellation imagery to monitor canopy photosynthesis remotely at high spatio-temporal resolution.
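To make the Chapter 4 quantities concrete: by their standard published definitions, NIRv is NDVI multiplied by NIR reflectance, and NIRvP is NIRv multiplied by incoming photosynthetically active radiation (PAR). The sketch below computes these and a footprint-weighted value; the footprint weights used here are placeholders, not the dissertation's footprint model.

```python
# NIRv = NDVI * NIR reflectance; NIRvP = NIRv * incoming PAR.
# Footprint matching is illustrated as a simple weighted average.
import numpy as np

def nirv(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi * nir

def nirvp(red: np.ndarray, nir: np.ndarray, par: float) -> np.ndarray:
    return nirv(red, nir) * par

# Example: reflectance tiles and a flux-footprint weight grid on the same raster.
red = np.random.uniform(0.02, 0.10, (200, 200))
nir = np.random.uniform(0.20, 0.45, (200, 200))
par = 1500.0                                  # umol m-2 s-1, illustrative value
weights = np.random.rand(200, 200)
weights /= weights.sum()                      # footprint weights sum to 1

footprint_nirvp = float(np.sum(nirvp(red, nir, par) * weights))
print(footprint_nirvp)
```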
Chapter 1. Introduction
1. Background
1.1 Daily gap-free surface reflectance using geostationary satellite products
1.2 Monitoring past vegetation changes with high spatial resolution
1.3 High spatiotemporal resolution vegetation photosynthesis maps
2. Purpose of Research
Chapter 2. Generating daily gap-filled BRDF adjusted surface reflectance product at 10 m resolution using geostationary satellite product for monitoring daily canopy photosynthesis
1. Introduction
2. Methods
2.1 Study sites
2.2 In situ measurements
2.3 Satellite products
2.4 Integrated system
2.5 Canopy photosynthesis
2.6 Evaluation
3. Results and discussion
3.1 Comparison of STIF NDVI and NIRv with in situ NDVI and NIRv
3.2 Comparison of STIF NIRvP with in situ NIRvP
4. Conclusion
Chapter 3. Super-resolution of historic Landsat imagery using a dual Generative Adversarial Network (GAN) model with CubeSat constellation imagery for monitoring vegetation changes
1. Introduction
2. Methods
2.1 Real-ESRGAN model
2.2 Study sites
2.3 In situ measurements
2.4 Vegetation index
2.5 Satellite data
2.6 Planet Fusion
2.7 Dual RSS-GAN via fine-tuned Real-ESRGAN
2.8 Evaluation
3. Results
3.1 Comparison of NDVI and NIRv maps from Planet Fusion, Sentinel-2 NBAR, and Landsat 8 NBAR data with in situ NDVI and NIRv
3.2 Comparison of dual RSS-SRGAN model results with Landsat 8 NDVI and NIRv
3.3 Comparison of dual RSS-GAN model results with respect to in situ time-series NDVI and NIRv
3.4 Comparison of the dual RSS-GAN model with NDVI and NIRv maps derived from RPAS
4. Discussion
4.1 Monitoring changes in terrestrial vegetation using the dual RSS-GAN model
4.2 CubeSat data in the dual RSS-GAN model
4.3 Perspectives and limitations
5. Conclusion
Appendices
Supplementary material
Chapter 4. Matching high resolution satellite data and flux tower footprints improves their agreement in photosynthesis estimates
1. Introduction
2. Methods
2.1 Study sites
2.2 In situ measurements
2.3 Planet Fusion NIRvP
2.4 Flux footprint model
2.5 Evaluation
3. Results
3.1 Comparison of Planet Fusion NIRv and NIRvP with in situ NIRv and NIRvP
3.2 Comparison of instantaneous Planet Fusion NIRv and NIRvP against tower GPP estimates
3.3 Daily GPP estimation from Planet Fusion-derived NIRvP
4. Discussion
4.1 Flux tower footprint matching and effects of spatial and temporal resolution on GPP estimation
4.2 Roles of radiation component in GPP mapping
4.3 Limitations and perspectives
5. Conclusion
Appendix
Supplementary Materials
Chapter 5. Conclusion
Bibliography
Abstract in Korean
Acknowledgements