49 research outputs found
A Study on Image Registration between High Resolution Optical and SAR Images Using SAR-SIFT and DLSS
Thesis (Master's) -- Graduate School of Seoul National University: College of Engineering, Department of Civil and Environmental Engineering, August 2018. Advisor: Kim Yong-il.
With recent advances in satellite sensor technology, Earth-observation satellites carrying a variety of sensors have been launched, and research on the fused analysis of multi-sensor satellite imagery is being actively pursued. In particular, because optical and SAR images cover different wavelength bands, using them together provides more detailed information about the land surface, and they can be applied broadly across remote sensing tasks such as object extraction, change detection, and disaster monitoring. This requires, as a preprocessing step, that the two images be registered. However, optical and SAR images differ in sensor attitude at acquisition and in the wavelength bands they record, which causes geometric and radiometric differences that make registration difficult. These differences are pronounced in densely built urban areas and are more conspicuous in high-resolution imagery than in medium- or low-resolution imagery. This study therefore proposes an effective methodology for registering high-resolution optical and SAR images over urban areas.
Previous work on optical-SAR registration falls largely into feature-based and intensity-based methods. Intensity-based methods are effective for registering images with different radiometric characteristics, but they are applicable only when there is little distortion or geometric displacement between the images. High-resolution optical and SAR images exhibit local distortions, and geometric displacements of tens of metres or more can occur between them, so research on registering them has centred on feature-based rather than intensity-based methods. Feature-based methods, however, extract many false matches between optical and SAR images because of their differing radiometric characteristics, which degrades registration performance. Hybrid methods combining intensity-based and feature-based techniques have been proposed to address this, but they remain limited to restricted scenes, such as areas containing linear features or urban areas with dense building clusters excluded. To improve on this, the present study proposes a registration method that combines the feature-based SAR-SIFT technique with the intensity-based DLSS technique, and adds three stages for extracting matching points: a preprocessing stage, a candidate matching point extraction stage, and a refined matching point extraction stage.
To register the high-resolution optical and SAR images, feature points are first extracted with SAR-SIFT, and matching points are then extracted from those feature points using DLSS. The extracted matching points, however, included many false matches. To resolve this, candidate matching points are obtained through a preprocessing stage and a candidate extraction stage that use a threshold and the displacement of each feature point, and refined matching points are then obtained by applying RANSAC to the candidates. Finally, an affine transformation is constructed from the refined matching points and applied to generate a SAR image registered to the optical image.
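The final two steps of the pipeline described above (outlier rejection with RANSAC, then an affine warp fitted to the surviving matches) can be sketched in NumPy. This is a minimal illustration, not the thesis's implementation: the iteration count, inlier tolerance, and array names are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (Nx2 arrays)."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return p.reshape(2, 3)                      # [[a, b, tx], [c, d, ty]]

def apply_affine(T, pts):
    return pts @ T[:, :2].T + T[:, 2]

def ransac_affine(src, dst, iters=500, tol=2.0, seed=0):
    """Keep the affine model with the largest inlier set, then refit on it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        T = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(T, src) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

The refit on all inliers at the end is what turns the minimal-sample hypothesis into a stable final transform.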
To verify the accuracy of the method, a representative high-resolution optical image, KOMPSAT-2, and high-resolution SAR images, TerraSAR-X and Cosmo-SkyMed, were used, and both visual and quantitative evaluations were performed. For the visual evaluation, mosaic images were generated; the shapes of objects were preserved across the boundaries between the two images, confirming that the registration performed well. For the quantitative evaluation, RMSE① computed from manually selected check points and RMSE② computed by cross-validation were used; over all study sites, RMSE① ranged from 1.51 m to 2.04 m and RMSE② from 1.34 m to 1.69 m. These accuracies compare favourably with previous studies. The proposed method is therefore judged effective for registering high-resolution optical and SAR images and is expected to serve as a registration technique for fused analysis of the two image types.
1. Introduction
1.1 Research background
1.2 Research trends
1.3 Purpose and scope of the research
2. Feature point extraction
2.1 Image preprocessing
2.2 Feature point extraction with SAR-SIFT
2.2.1 Limitations of SIFT
2.2.2 The SAR-SIFT method
3. Matching point extraction
3.1 Matching point extraction with DLSS
3.1.1 The LSS descriptor
3.1.2 The DLSS descriptor vector
3.1.3 Limitations of DLSS
3.2 Proposed matching point extraction method
3.2.1 Preprocessing stage
3.2.2 Candidate matching point extraction stage
3.2.3 Refined matching point extraction stage
3.3 Registration and accuracy assessment methods
3.3.1 Affine transformation
3.3.2 Accuracy assessment method
4. Experiments and evaluation
4.1 Study sites and data
4.2 Feature point extraction results
4.2.1 Feature point extraction with SIFT
4.2.2 Feature point extraction with SAR-SIFT
4.3 Matching point extraction results
4.3.1 Matching point extraction with existing methods
4.3.2 Matching point extraction with the proposed method
4.4 Registration results and evaluation
5. Conclusion
Abstract
STELLAR: A LARGE SATELLITE STEREO DATASET FOR DIGITAL SURFACE MODEL GENERATION
Stellar is a large satellite stereo dataset. It contains rectified stereo pairs of terrain captured by satellite image sensors, together with corresponding ground-truth disparity maps and semantic segmentation labels. Unlike stereo vision in autonomous driving and mobile imaging, a satellite stereo pair is not captured simultaneously, so the same object in a satellite stereo pair is more likely to vary in visual appearance. Stellar provides flexible access to such stereo pairs for training methods to be robust to this appearance variation. We use publicly available data sources and developed several techniques for data registration, rectification, and semantic segmentation to build Stellar. In a preliminary experiment, we fine-tuned two deep-learning stereo methods on Stellar. The results demonstrate that these methods usually generate denser and more accurate disparity maps for satellite stereo after fine-tuning on Stellar than without fine-tuning on satellite stereo data, or after fine-tuning on previous, smaller satellite stereo datasets. Stellar is available for download at https://github.com/guo-research-group/Stellar
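"Denser and more accurate" disparity maps are typically quantified by prediction density, mean end-point error, and a bad-pixel fraction. A minimal NumPy sketch of those metrics follows; the function name, NaN conventions, and 3-pixel threshold are illustrative assumptions, not Stellar's official evaluation protocol.

```python
import numpy as np

def disparity_metrics(pred, gt, bad_thresh=3.0):
    """Density of the prediction plus error statistics on valid ground truth.

    pred: predicted disparity, NaN where the method produced no estimate.
    gt:   ground-truth disparity, NaN where no ground truth exists.
    """
    valid = ~np.isnan(gt)                       # pixels with ground truth
    have_pred = ~np.isnan(pred)                 # pixels with a prediction
    density = have_pred.mean()                  # fraction of predicted pixels
    both = valid & have_pred
    err = np.abs(pred[both] - gt[both])
    epe = float(err.mean())                     # mean end-point error (pixels)
    bad = float((err > bad_thresh).mean())      # fraction of bad pixels
    return density, epe, bad
```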
Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping
The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The TAN system utilises salient visual features representing semantic, human-interpretable objects (roads, forest and water boundaries) extracted from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features through a series of robust data association steps yields a localisation solution with a finite absolute error bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only were visual features such as road networks, shorelines and water bodies used to obtain a position "fix"; they were also used in reverse to map vehicles detected on the roads into an inertial space with improved precision. The combined correction of geo-coding errors and improved aircraft localisation forms a robust solution for the defense mapping application. A system of the proposed design will provide a complete, independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
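The core of the data association step, matching detected features against the a-priori reference database, can be sketched as gated nearest-neighbour association. This is a generic illustration under assumed 2-D feature positions and an assumed gate distance; the thesis's actual association pipeline is more elaborate.

```python
import numpy as np

def associate(detected, reference, gate=15.0):
    """Greedy nearest-neighbour association with a distance gate.

    detected:  Mx2 feature positions from onboard imagery (hypothetical).
    reference: Nx2 positions from the reference database (hypothetical).
    Returns (detected_idx, reference_idx) pairs whose distance is inside the gate.
    """
    pairs = []
    used = set()
    # distance matrix between every detection and every reference feature
    d = np.linalg.norm(detected[:, None, :] - reference[None, :, :], axis=2)
    for i in np.argsort(d.min(axis=1)):          # most confident detections first
        j = int(np.argmin(d[i]))
        if d[i, j] < gate and j not in used:     # the gate rejects implausible matches
            pairs.append((int(i), j))
            used.add(j)
    return pairs
```

Processing the most confident detections first and marking reference features as used keeps the association one-to-one, which is what lets the downstream localisation filter treat each pair as an independent measurement.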
Nonparametric image registration of airborne LiDAR, hyperspectral and photographic imagery of wooded landscapes
There is much current interest in using multisensor airborne remote sensing to monitor the structure and biodiversity of woodlands. This paper addresses the application of nonparametric (NP) image-registration techniques to precisely align images obtained from multisensor imaging, which is critical for the successful identification of individual trees using object recognition approaches. NP image registration, in particular optimizing an objective function containing similarity and regularization terms, provides a flexible approach to image registration. Here, we develop a NP registration approach in which a normalized gradient field is used to quantify similarity and curvature is used for regularization (the NGF-Curv method). Using a survey of woodlands in southern Spain as an example, we show that NGF-Curv can successfully fuse data sets when there is little prior knowledge about how they are interrelated (i.e., in the absence of ground control points). The validity of NGF-Curv in airborne remote sensing is demonstrated by a series of experiments. We show that NGF-Curv is capable of aligning images precisely, making it a valuable component of algorithms designed to identify objects, such as trees, within multisensor data sets.
This work was supported by the Airborne Research and Survey Facility of the U.K.'s Natural Environment Research Council (NERC) for collecting and preprocessing the data used in this research project [EU11/03/100], and by grants from the King Abdullah University of Science and Technology and the Wellcome Trust (BBSRC). D. Coomes was supported by a grant from NERC (NE/K016377/1) and funding from DEFRA and the BBSRC to develop methods for monitoring ash dieback from aircraft. This is the final version; it was first published by IEEE at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7116541
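The normalized-gradient-field similarity term at the heart of NGF-Curv can be sketched in a few lines of NumPy. This is a minimal illustration of the similarity measure only (the curvature regularizer and the optimizer are omitted), and the edge parameter `eps` is an assumed value, not the paper's setting.

```python
import numpy as np

def ngf_similarity(R, T, eps=1e-2):
    """Normalized-gradient-field distance between images R and T.

    Gradients are normalized so that only edge orientation matters, which is
    what makes the measure usable across sensors with different intensity
    characteristics. Returns ~0 for aligned gradient fields, larger otherwise.
    """
    def ngf(img):
        gy, gx = np.gradient(img.astype(float))
        norm = np.sqrt(gx**2 + gy**2 + eps**2)   # eps stabilizes flat regions
        return gx / norm, gy / norm
    rx, ry = ngf(R)
    tx, ty = ngf(T)
    dot = rx * tx + ry * ty                      # cosine of the gradient angle
    return float(np.mean(1.0 - dot**2))          # the NGF distance term
```

Because the gradients are normalized, an image and an affinely rescaled copy of it (e.g. `5*R + 3`) score as essentially identical, which is exactly the multisensor invariance the method relies on.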
Automated Ortho-Rectification of UAV-Based Hyperspectral Data over an Agricultural Field Using Frame RGB Imagery
Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of information derived from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data, and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, to automate the identification of conjugate features in the RGB and hyperspectral imagery.
The SURF modification takes the available direct geo-referencing information into consideration to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. The identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.
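The key idea, using geo-referencing to constrain descriptor matching so that repetitive field texture does not produce false matches, can be sketched generically. The descriptors, predicted ground positions, search radius, and ratio threshold below are all illustrative assumptions, not the paper's modified-SURF implementation.

```python
import numpy as np

def geo_constrained_match(desc_a, pos_a, desc_b, pos_b, radius=20.0, ratio=0.8):
    """Descriptor matching restricted by predicted geometric proximity.

    desc_*: feature descriptors (one row per feature, hypothetical).
    pos_*:  predicted ground positions of the features after applying the
            GNSS/INS geo-referencing (hypothetical).
    Only candidates within `radius` are compared, which suppresses false
    matches from repetitive texture; a Lowe-style ratio test then rejects
    descriptors that remain ambiguous inside the search window.
    """
    matches = []
    for i, (d, p) in enumerate(zip(desc_a, pos_a)):
        near = np.where(np.linalg.norm(pos_b - p, axis=1) < radius)[0]
        if near.size == 0:
            continue                              # nothing plausible nearby
        dist = np.linalg.norm(desc_b[near] - d, axis=1)
        order = np.argsort(dist)
        # ratio test only applies when a second candidate exists in the window
        if near.size == 1 or dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(near[order[0]])))
    return matches
```

Note that a geometrically distant feature with an identical descriptor is never even considered, which is precisely how the navigation data disambiguates repetitive crop rows.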
Advances in Image Processing, Analysis and Recognition Technology
For many decades, researchers have been trying to make computer analysis of images as effective as human vision. Many algorithms and systems have been created for this purpose. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied in many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. Indeed, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power and efficiency has allowed the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, creating a need for novel approaches.