10 research outputs found

    ๋“œ๋ก ์„ ํ™œ์šฉํ•œ ์œ„์„ฑ ์ง€ํ‘œ๋ฐ˜์‚ฌ๋„ ์‚ฐ์ถœ๋ฌผ ๊ณต๊ฐ„ ํŒจํ„ด ๋ถ„์„

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๋†์—…์ƒ๋ช…๊ณผํ•™๋Œ€ํ•™ ์ƒํƒœ์กฐ๊ฒฝยท์ง€์—ญ์‹œ์Šคํ…œ๊ณตํ•™๋ถ€(์ƒํƒœ์กฐ๊ฒฝํ•™), 2021.8. ์กฐ๋Œ€์†”.High-resolution satellites are assigned to monitor land surface in detail. The reliable surface reflectance (SR) is the fundamental in terrestrial ecosystem modeling so the temporal and spatial validation is essential. Usually based on multiple ground control points (GCPs), field spectroscopy guarantees the temporal continuity. Due to limited sampling, however, it hardly illustrates the spatial pattern. As a map, the pixelwise spatial variability of SR products is not well-documented. In this study, we introduced drone-based hyperspectral image (HSI) as a reference and compared the map with Sentinel 2 and Landsat 8 SR products on a heterogeneous rice paddy landscape. First, HSI was validated by field spectroscopy and swath overlapping, which assured qualitative radiometric accuracy within the viewing geometry. Second, HSI was matched to the satellite SRs. It involves spectral and spatial aggregation, co-registration and nadir bidirectional reflectance distribution function (BRDF)-adjusted reflectance (NBAR) conversion. Then, we 1) quantified the spatial variability of the satellite SRs and the vegetation indices (VIs) including NDVI and NIRv by APU matrix, 2) qualified them pixelwise by theoretical error budget and 3) examined the improvement by BRDF normalization. Sentinel 2 SR exhibits overall good agreement with drone HSI while the two NIRs are biased up to 10%. Despite the bias in NIR, the NDVI shows a good match on vegetated areas and the NIRv only displays the discrepancy on built-in areas. Landsat 8 SR was biased over the VIS bands (-9 ~ -7.6%). BRDF normalization just contributed to a minor improvement. Our results demonstrate the potential of drone HSI to replace in-situ observation and evaluate SR or atmospheric correction algorithms over the flat terrain. Future researches should replicate the results over the complex terrain and canopy structure (i.e. forest).์›๊ฒฉํƒ์‚ฌ์—์„œ ์ง€ํ‘œ ๋ฐ˜์‚ฌ๋„(SR)๋Š” ์ง€ํ‘œ์ •๋ณด๋ฅผ ๋น„ํŒŒ๊ดด์ ์ด๊ณ  ์ฆ‰๊ฐ์ ์ธ ๋ฐฉ๋ฒ•์œผ๋กœ ์ „๋‹ฌํ•ด์ฃผ๋Š” ๋งค๊ฐœ์ฒด ์—ญํ• ์„ ํ•œ๋‹ค. ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๋Š” SR์€ ์œก์ƒ ์ƒํƒœ๊ณ„ ๋ชจ๋ธ๋ง์˜ ๊ธฐ๋ณธ์ด๊ณ , ์ด์— ๋”ฐ๋ผ SR์˜ ์‹œ๊ณต๊ฐ„์  ๊ฒ€์ฆ์ด ์š”๊ตฌ๋œ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ SR์€ ์—ฌ๋Ÿฌ ์ง€์ƒ ๊ธฐ์ค€์ (GCP)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ํ˜„์žฅ ๋ถ„๊ด‘๋ฒ•์„ ํ†ตํ•ด์„œ ์‹œ๊ฐ„์  ์—ฐ์†์„ฑ์ด ๋ณด์žฅ๋œ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ˜„์žฅ ๋ถ„๊ด‘๋ฒ•์€ ์ œํ•œ์ ์ธ ์ƒ˜ํ”Œ๋ง์œผ๋กœ ๊ณต๊ฐ„ ํŒจํ„ด์„ ๊ฑฐ์˜ ๋ณด์—ฌ์ฃผ์ง€ ์•Š์•„, ์œ„์„ฑ SR์˜ ํ”ฝ์…€ ๋ณ„ ๊ณต๊ฐ„ ๋ณ€๋™์„ฑ์€ ์ž˜ ๋ถ„์„๋˜์ง€ ์•Š์•˜๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ๋“œ๋ก  ๊ธฐ๋ฐ˜์˜ ์ดˆ๋ถ„๊ด‘ ์˜์ƒ(HSI)์„ ์ฐธ๊ณ ์ž๋ฃŒ๋กœ ๋„์ž…ํ•˜์—ฌ, ์ด๋ฅผ ์ด์งˆ์ ์ธ ๋…ผ ๊ฒฝ๊ด€์—์„œ Sentinel 2 ๋ฐ Landsat 8 SR๊ณผ ๋น„๊ตํ•˜์˜€๋‹ค. ์šฐ์„ , ๋“œ๋ก  HSI๋Š” ํ˜„์žฅ ๋ถ„๊ด‘๋ฒ• ๋ฐ ๊ฒฝ๋กœ ์ค‘์ฒฉ์„ ํ†ตํ•ด์„œ ๊ด€์ธก๊ฐ๋„ ๋ฒ”์œ„ ๋‚ด์—์„œ ์ •์„ฑ์ ์ธ ๋ฐฉ์‚ฌ ์ธก์ •์„ ๋ณด์žฅํ•œ๋‹ค๊ณ  ๊ฒ€์ฆ๋˜์—ˆ๋‹ค. ์ดํ›„, ๋“œ๋ก  HSI๋Š” ์œ„์„ฑ SR์˜ ๋ถ„๊ด‘๋ฐ˜์‘ํŠน์„ฑ, ๊ณต๊ฐ„ํ•ด์ƒ๋„ ๋ฐ ์ขŒํ‘œ๊ณ„๋ฅผ ๊ธฐ์ค€์œผ๋กœ ๋งž์ถฐ์กŒ๊ณ , ๊ด€์ธก ๊ธฐํ•˜๋ฅผ ํ†ต์ผํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋“œ๋ก  HIS์™€ ์œ„์„ฑ SR์€ ๊ฐ๊ฐ ์–‘๋ฐฉํ–ฅ๋ฐ˜์‚ฌ์œจ๋ถ„ํฌํ•จ์ˆ˜ (BRDF) ์ •๊ทœํ™” ๋ฐ˜์‚ฌ๋„ (NBAR)๋กœ ๋ณ€ํ™˜๋˜์—ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, 1) APU ํ–‰๋ ฌ์œผ๋กœ ์œ„์„ฑ SR๊ณผ NDVI, NIRv๋ฅผ ํฌํ•จํ•˜๋Š” ์‹์ƒ์ง€์ˆ˜(VI)์˜ ๊ณต๊ฐ„๋ณ€๋™์„ฑ์„ ์ •๋Ÿ‰ํ™” ํ–ˆ๊ณ , 2) ๋Œ€๊ธฐ๋ณด์ •์˜ ์ด๋ก ์  ์˜ค์ฐจ๋ฅผ ๊ธฐ์ค€์œผ๋กœ SR๊ณผ VI๋ฅผ ํ”ฝ์…€๋ณ„๋กœ ํ‰๊ฐ€ํ–ˆ๊ณ , 3) BRDF ์ •๊ทœํ™”๋ฅผ ํ†ตํ•œ ๊ฐœ์„  ์‚ฌํ•ญ์„ ๊ฒ€ํ† ํ–ˆ๋‹ค. 
Sentinel 2 SR์€ ๋“œ๋ก  HSI์™€ ์ „๋ฐ˜์ ์œผ๋กœ ์ข‹์€ ์ผ์น˜๋ฅผ ๋ณด์ด๋‚˜, ๋‘ NIR ์ฑ„๋„์€ ์ตœ๋Œ€ 10% ํŽธํ–ฅ๋˜์—ˆ๋‹ค. NIR์˜ ํŽธํ–ฅ์€ ์‹์ƒ์ง€์ˆ˜์—์„œ ํ† ์ง€ ํ”ผ๋ณต์— ๋”ฐ๋ผ ๋‹ค๋ฅธ ์˜ํ–ฅ์„ ๋ฏธ์ณค๋‹ค. NDVI๋Š” ์‹์ƒ์—์„œ๋Š” ๋‚ฎ์€ ํŽธํ–ฅ์„ ๋ณด์—ฌ์คฌ๊ณ , NIRv๋Š” ๋„์‹œ์‹œ์„ค๋ฌผ ์˜์—ญ์—์„œ๋งŒ ๋†’์€ ํŽธํ–ฅ์„ ๋ณด์˜€๋‹ค. Landsat 8 SR์€ VIS ์ฑ„๋„์— ๋Œ€ํ•ด ํŽธํ–ฅ๋˜์—ˆ๋‹ค (-9 ~ -7.6%). BRDF ์ •๊ทœํ™”๋Š” ์œ„์„ฑ SR์˜ ํ’ˆ์งˆ์„ ๊ฐœ์„ ํ–ˆ์ง€๋งŒ, ๊ทธ ์˜ํ–ฅ์€ ๋ถ€์ˆ˜์ ์ด์—ˆ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ํ‰ํƒ„ํ•œ ์ง€ํ˜•์—์„œ ๋“œ๋ก  HSI๊ฐ€ ํ˜„์žฅ ๊ด€์ธก์„ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๊ณ , ๋”ฐ๋ผ์„œ ์œ„์„ฑ SR์ด๋‚˜ ๋Œ€๊ธฐ๋ณด์ • ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ‰๊ฐ€ํ•˜๋Š”๋ฐ ํ™œ์šฉ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋ณด์˜€๋‹ค. ํ–ฅํ›„ ์—ฐ๊ตฌ์—์„œ๋Š” ์‚ฐ๋ฆผ์œผ๋กœ ๋Œ€์ƒ์ง€๋ฅผ ํ™•๋Œ€ํ•˜์—ฌ, ์ง€ํ˜•๊ณผ ์บ๋…ธํ”ผ ๊ตฌ์กฐ๊ฐ€ ๋“œ๋ก  HSI ๋ฐ ์œ„์„ฑ SR์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์„ ๋ถ„์„ํ•  ํ•„์š”๊ฐ€ ์žˆ๋‹ค.Chapter 1. Introduction 1 1.1 Background 1 Chapter 2. Method 3 2.1 Study Site 3 2.2 Drone campaign 4 2.3 Data processing 4 2.3.1 Sensor calibration 5 2.3.2 Bidirectional reflectance factor (BRF) calculation 7 2.3.3 BRDF correction 7 2.3.4 Orthorectification 8 2.3.5 Spatial Aggregation 9 2.3.6 Co-registration 10 2.4 Satellite dataset 10 2.4.2 Landsat 8 12 Chapter 3. Result and Discussion 12 3.1 Drone BRF map quality assessment 12 3.1.1 Radiometric accuracy 12 3.1.2 BRDF effect 15 3.2 Spatial variability in satellite surface reflectance product 16 3.2.1 Sentinel 2B (10m) 17 3.2.2 Sentinel 2B (20m) 22 3.2.3 Landsat 8 26 Chapter 4. Conclusion 28 Supplemental Materials 30 Bibliography 34 Abstract in Korean 43์„

    Fusing Multiple Multiband Images

    Full text link
    We consider the problem of fusing an arbitrary number of multiband (i.e., panchromatic, multispectral, or hyperspectral) images belonging to the same scene. We use the well-known forward observation and linear mixture models with Gaussian perturbations to formulate the maximum-likelihood estimator of the endmember abundance matrix of the fused image. We calculate the Fisher information matrix for this estimator and examine the conditions for its uniqueness. We use a vector total-variation penalty term together with nonnegativity and sum-to-one constraints on the endmember abundances to regularize the derived maximum-likelihood estimation problem. The regularization exploits the prior knowledge that natural images are mostly composed of piecewise-smooth regions with limited abrupt changes (i.e., edges), and copes with potential ill-posedness of the fusion problem. We solve the resultant convex optimization problem using the alternating direction method of multipliers (ADMM). We utilize the circular convolution theorem in conjunction with the fast Fourier transform (FFT) to alleviate the computational complexity of the proposed algorithm. Experiments with multiband images constructed from real hyperspectral datasets reveal the superior performance of the proposed algorithm in comparison with state-of-the-art algorithms, which must be used in tandem to fuse more than two multiband images.
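    The computational trick mentioned at the end deserves a small illustration: under a periodic-boundary assumption, a cyclic blur operator is diagonalized by the DFT, so the quadratic subproblems arising inside each ADMM iteration reduce to pointwise divisions in the Fourier domain. The following is a generic sketch of that building block, not the authors' fusion code; all function names are hypothetical.

        import numpy as np

        def pad_to(h, shape):
            """Zero-pad a small kernel h to the full image shape, centred."""
            out = np.zeros(shape)
            r, c = h.shape
            r0, c0 = (shape[0] - r) // 2, (shape[1] - c) // 2
            out[r0:r0 + r, c0:c0 + c] = h
            return out

        def circ_conv2d(x, h):
            """Circular 2-D convolution via the FFT (circular convolution
            theorem): O(N log N) instead of O(N^2) per operator application."""
            H = np.fft.fft2(np.fft.ifftshift(pad_to(h, x.shape)))
            return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

        def fourier_ridge_solve(y, h, lam=1e-2):
            """Closed-form minimiser of ||h (*) x - y||^2 + lam ||x||^2, the
            kind of quadratic subproblem each ADMM step reduces to; the
            cyclic blur is diagonal in the Fourier domain, so the solve is
            a pointwise division."""
            H = np.fft.fft2(np.fft.ifftshift(pad_to(h, y.shape)))
            X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
            return np.real(np.fft.ifft2(X))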

    Landsat 15-m Panchromatic-Assisted Downscaling (LPAD) of the 30-m Reflective Wavelength Bands to Sentinel-2 20-m Resolution

    Get PDF
    The Landsat 15-m Panchromatic-Assisted Downscaling (LPAD) method, which downscales Landsat-8 Operational Land Imager (OLI) 30-m data to Sentinel-2 MultiSpectral Instrument (MSI) 20-m resolution, is presented. The method first downscales the Landsat-8 30-m OLI bands to 15 m using the spatial detail provided by the Landsat-8 15-m panchromatic band and then reprojects and resamples the downscaled 15-m data into registration with Sentinel-2A 20-m data. The LPAD method is demonstrated using pairs of contemporaneous Landsat-8 OLI and Sentinel-2A MSI images sensed less than 19 min apart over diverse geographic environments. The LPAD method is shown to introduce less spectral and spatial distortion and to provide visually more coherent data than conventional bilinear and cubic-convolution resampled 20-m Landsat OLI data. In addition, results for a pair of Landsat-8 and Sentinel-2A images sensed one day apart suggest that image fusion should be undertaken with caution when the images are acquired under different atmospheric conditions. The LPAD source code is available on GitHub for public use.
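    A rough illustration of the first stage, under an intensity-modulation assumption: a bilinearly upsampled 30-m band is modulated by the ratio of the 15-m pan band to a smoothed version of itself, so the injected detail comes only from the pan band. This is a generic pan-assisted downscaling sketch, not the published LPAD implementation (the actual source code is on GitHub, as noted); it assumes a co-registered pan band at exactly twice the resolution and omits the reprojection to the Sentinel-2 20-m grid.

        import numpy as np
        from scipy.ndimage import convolve, zoom

        def pan_assisted_downscale(ms30, pan15, eps=1e-6):
            """Downscale one 30-m band to 15 m using pan-band detail.
            ms30: (H, W) 30-m band; pan15: (2H, 2W) co-registered pan band."""
            ms15 = zoom(ms30, 2, order=1)   # bilinear 2x upsample: naive 15-m estimate
            # Pan band smoothed toward (roughly) 30-m spatial detail.
            pan_lo = convolve(pan15, np.ones((3, 3)) / 9.0, mode='reflect')
            return ms15 * pan15 / (pan_lo + eps)   # intensity modulation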

    Object-Based Area-to-Point Regression Kriging for Pansharpening

    Get PDF
    Optical Earth observation satellite sensors often provide a coarse-spatial-resolution (CR) multispectral (MS) image together with a fine-spatial-resolution (FR) panchromatic (PAN) image. Pansharpening is a technique applied to such satellite sensor images to generate an FR MS image by injecting spatial detail taken from the FR PAN image while preserving the spectral information of the MS image. Pansharpening methods are mostly applied on a per-pixel basis and use the PAN image to extract spatial detail. However, many land-cover objects in FR satellite sensor images appear not as independent pixels but as groups of spatially aggregated pixels that carry important semantic information. In this article, an object-based pansharpening approach, termed object-based area-to-point regression kriging (OATPRK), is proposed. OATPRK fuses the MS and PAN images at the object scale and thus takes advantage of both the unified spectral information within the CR MS image and the spatial detail of the FR PAN image. OATPRK is composed of three stages: image segmentation, object-based regression, and residual downscaling. Three datasets acquired from IKONOS and WorldView-2 and 11 benchmark pansharpening algorithms were used to provide a comprehensive assessment of the proposed approach. In both the synthetic and real experiments, OATPRK produced the best pan-sharpened results in terms of visual and quantitative assessment. OATPRK advances the pixel-level geostatistical pansharpening approach to the object level and provides more accurate pan-sharpened MS images.
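    Of the three stages, the middle one is the easiest to sketch. Below is a minimal stand-in for the object-based regression step only, assuming a precomputed segmentation `labels` on the FR grid; the geostatistical residual downscaling (area-to-point kriging) is replaced here by a constant per-object offset, so this is illustrative rather than the paper's method.

        import numpy as np

        def object_regression_sharpen(pan_fr, ms_cr_on_fr, labels):
            """Object-based regression pansharpening sketch.
            pan_fr:      fine-resolution PAN image, shape (H, W)
            ms_cr_on_fr: one coarse MS band resampled to the FR grid, (H, W)
            labels:      integer segmentation map on the FR grid, (H, W)"""
            ids = np.unique(labels)
            pan_mu = np.array([pan_fr[labels == i].mean() for i in ids])
            ms_mu = np.array([ms_cr_on_fr[labels == i].mean() for i in ids])
            # Regress object-mean MS against object-mean PAN: ms ~ a*pan + b.
            a, b = np.polyfit(pan_mu, ms_mu, 1)
            pred = a * pan_fr + b        # inject PAN spatial detail per pixel
            # Add back per-object residuals so each object keeps its MS mean
            # (OATPRK downscales these residuals with area-to-point kriging;
            # a constant offset per object is the crude stand-in used here).
            out = pred.copy()
            for i, mu in zip(ids, ms_mu):
                m = labels == i
                out[m] += mu - pred[m].mean()
            return out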

    Tracking small-scale tropical forest disturbances: Fusing the Landsat and Sentinel-2 data record

    Get PDF
    Information on forest disturbance is crucial for tropical forest management and global carbon-cycle analysis. The long-term data record of the Landsat missions provides some of the most valuable information for understanding the processes of global tropical forest disturbance. However, there are substantial uncertainties in the estimation of non-mechanized, small-scale (i.e., small-area) clearings in tropical forests with Landsat series images. Because small-scale openings in a tropical tree canopy are often ephemeral due to fast-growing vegetation, and because clouds are frequent in tropical regions, it is challenging for Landsat images to capture the logging signal. Moreover, the spatial resolution of Landsat images is typically too coarse to represent spatial details of small-scale clearings. In this paper, by fusing all available Landsat and Sentinel-2 images, we propose a method to improve the tracking of small-scale tropical forest disturbance history at both fine spatial and fine temporal resolution. First, yearly composited Landsat and Sentinel-2 self-referenced normalized burn ratio (rNBR) vegetation-index images were calculated from all available Landsat-7/8 and Sentinel-2 scenes during 2016–2019. Second, a deep-learning-based downscaling method was used to predict fine-resolution (10 m) rNBR images from the annual coarse-resolution (30 m) Landsat rNBR images. Third, given the baseline Landsat forest map in 2015, the generated fine-resolution Landsat rNBR images and the original Sentinel-2 rNBR images were fused to produce the 10-m forest disturbance map for the period 2016–2019. Comparison and evaluation demonstrated that the deep-learning-based downscaling method can produce fine-resolution Landsat rNBR images and forest disturbance maps with substantial spatial detail. In addition, by fusing the downscaled fine-resolution Landsat rNBR images and the original Sentinel-2 rNBR images, it was possible to produce state-of-the-art forest disturbance maps with overall accuracy (OA) above 87% and 96% for the small and large study areas, respectively, detecting 11% to 21% more disturbed area than either the Sentinel-2 or the Landsat-7/8 time series alone. We found that 1.42% of the disturbed areas identified during 2016–2019 experienced multiple forest disturbances. The method has great potential to support major policies such as the Reducing Emissions from Deforestation and forest Degradation (REDD+) programme.
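    The rNBR index at the core of the time series can be written down directly. A minimal sketch follows, loosely in the style of Langner-type self-referencing: each pixel's NBR is offset by the median NBR of a large surrounding window, so intact canopy sits near zero and small fresh clearings stand out as local anomalies. The window size, the sign convention (disturbance as a negative anomaly), and the per-year minimum compositing are assumptions here, not necessarily the paper's exact choices.

        import numpy as np
        from scipy.ndimage import median_filter

        def nbr(nir, swir2):
            """Normalized Burn Ratio from NIR and SWIR-2 reflectance."""
            return (nir - swir2) / (nir + swir2 + 1e-10)

        def rnbr(nir, swir2, win=61):
            """Self-referenced NBR: NBR minus the median NBR of the
            surrounding window, so the index is comparable across scenes."""
            x = nbr(nir, swir2)
            return x - median_filter(x, size=win)

        def yearly_composite(rnbr_scenes):
            """Composite all cloud-masked scenes of one year (list of 2-D
            arrays with NaN under clouds) into one change image, keeping
            the strongest (most negative) disturbance signal per pixel."""
            return np.nanmin(np.stack(rnbr_scenes), axis=0)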

    Multisource and Multitemporal Data Fusion in Remote Sensing

    Get PDF
    The recent sharp increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide across a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion across these research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations of this challenging topic, supplying sufficient detail and references.

    Hypercomplex Quality Assessment of Multi/Hyperspectral Images

    No full text
    This letter presents a novel image quality index that extends the Universal Image Quality Index for monochrome images to multispectral and hyperspectral images through hypercomplex numbers. The proposed index is based on the computation of the hypercomplex correlation coefficient between the reference and test images, which jointly measures spectral and spatial distortions. Experimental results are presented for both true and simulated spaceborne and airborne visible/infrared images. The results demonstrate accurate measurement of inter- and intra-band distortions even when anomalous pixel values are concentrated in a few bands.
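    For four-band images, the hypercomplex construction can be made concrete with quaternions: each pixel becomes q = b1 + b2·i + b3·j + b4·k, and the index multiplies the modulus of the quaternion correlation coefficient by contrast and luminance terms. The sketch below is a global (non-windowed) Q4-style formulation and is an assumption about the letter's exact construction; the published index is typically computed over sliding blocks and averaged.

        import numpy as np

        def qmul(p, q):
            """Hamilton product of quaternion arrays, components on axis 0."""
            pw, px, py, pz = p
            qw, qx, qy, qz = q
            return np.stack([
                pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw,
            ])

        def qconj(q):
            """Quaternion conjugate: negate the three imaginary parts."""
            return np.stack([q[0], -q[1], -q[2], -q[3]])

        def q4_index(ref, test):
            """Quaternion quality index for two 4-band images, shape (4, H, W):
            |hypercomplex correlation| x contrast term x luminance term."""
            mr = ref.reshape(4, -1).mean(axis=1)
            mt = test.reshape(4, -1).mean(axis=1)
            zr = ref - mr[:, None, None]      # centred quaternion fields
            zt = test - mt[:, None, None]
            cov = qmul(zr, qconj(zt)).reshape(4, -1).mean(axis=1)
            s_r = np.sqrt((zr ** 2).sum(axis=0).mean())   # quaternion std devs
            s_t = np.sqrt((zt ** 2).sum(axis=0).mean())
            corr = np.linalg.norm(cov) / (s_r * s_t)      # correlation modulus
            contrast = 2 * s_r * s_t / (s_r**2 + s_t**2)
            mr_n, mt_n = np.linalg.norm(mr), np.linalg.norm(mt)
            luminance = 2 * mr_n * mt_n / (mr_n**2 + mt_n**2)
            return corr * contrast * luminance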