147 research outputs found

    Robust Feature Matching Method for SAR and Optical Images by Using Gaussian-Gamma-Shaped Bi-Windows-Based Descriptor and Geometric Constraint

    Improving the matching reliability of multi-sensor imagery is one of the most challenging issues of recent years, particularly for synthetic aperture radar (SAR) and optical images. It is difficult to deal with the noise, geometric distortions, and nonlinear radiometric differences between SAR and optical images. In this paper, a method for matching SAR and optical images is proposed. First, interest points that are robust to speckle noise in SAR images are detected by improving the original phase-congruency-based detector. Second, feature descriptors are constructed for all interest points by combining a new Gaussian-Gamma-shaped bi-windows-based gradient operator with the histogram-of-oriented-gradient pattern. Third, descriptor similarity and geometric relationships are combined to constrain the matching process. Finally, an approach based on global and local constraints is proposed to eliminate outliers. In the experiments, SAR images including COSMO-SkyMed, RADARSAT-2, TerraSAR-X and HJ-1C images, and optical images including ZY-3 and Google Earth images, are used to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method provides significant improvements in the number of correct matches and in matching precision compared with state-of-the-art SIFT-like methods. Registration accuracy of nearly one pixel is obtained from the matching results of the proposed method.
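    The core of the descriptor stage is a gradient operator built from the ratio, rather than the difference, of local means, which remains stable under the multiplicative speckle noise of SAR imagery. The following Python sketch illustrates that ratio-gradient idea under simplifying assumptions: the Gaussian smoothing, window offset, and scale here are placeholders, not the authors' exact Gaussian-Gamma-shaped bi-window formulation.

```python
# Minimal sketch of a ratio-based gradient for SAR images. The log-ratio
# of local means on either side of a pixel is robust to multiplicative
# speckle, unlike a difference-based gradient. Window shapes are
# simplified assumptions, not the paper's Gaussian-Gamma bi-windows.
import numpy as np
from scipy.ndimage import gaussian_filter

def ratio_gradient(img, sigma=2.0, offset=3):
    img = img.astype(np.float64) + 1e-6        # avoid log(0)
    smooth = gaussian_filter(img, sigma)       # local mean estimate
    # log-ratio of the means on opposite sides of each pixel
    gx = np.log(np.roll(smooth, -offset, axis=1) /
                np.roll(smooth,  offset, axis=1))
    gy = np.log(np.roll(smooth, -offset, axis=0) /
                np.roll(smooth,  offset, axis=0))
    return np.hypot(gx, gy), np.arctan2(gy, gx)  # magnitude, orientation
```

    The resulting magnitude and orientation fields would then feed a histogram-of-oriented-gradient descriptor at each interest point, as in the paper's second step.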

    FEATURE MATCHING ENHANCEMENT OF UAV IMAGES USING GEOMETRIC CONSTRAINTS

    Preliminary matching of image features is based on the distance between their descriptors. Matches are further filtered using RANSAC or a similar method that fits the matches to a model, usually the fundamental matrix, and rejects matches not belonging to that model. There are a few issues with this scheme. First, mismatches are no longer considered after RANSAC rejection. Second, RANSAC might fail to detect an accurate model if the number of outliers is significant. Third, a fundamental-matrix model can be degenerate even if the matches are all inliers. To address these issues, a new method is proposed that relies on prior knowledge of the images' geometry, which can be obtained from the orientation sensors or from a set of initial matches. Using a set of initial matches, a fundamental matrix and a global homography can be estimated. These two entities are then used in a detect-and-match strategy to gain more accurate matches. Features are detected in one image, and the locations of their correspondences in the other image are predicted using the epipolar constraints and the global homography. The feature correspondences are then corrected with template matching. Since a global homography is only valid for plane-to-plane mapping, discrepancy vectors are introduced as an alternative to local homographies. The method was tested on Unmanned Aerial Vehicle (UAV) images, where the images are usually taken successively and differences in scale and orientation are not an issue. The method promises to find a well-distributed set of matches over the scene structure, especially in scenes with multiple depths. Furthermore, the number of outliers is reduced, encouraging the use of a least-squares adjustment instead of RANSAC to fit a non-degenerate model.
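    The detect-and-match strategy lends itself to a compact sketch: predict a feature's location in the second image with the global homography, refine the prediction by normalized cross-correlation, and accept it only if it lies near the corresponding epipolar line. The Python/OpenCV sketch below follows that outline; the window sizes, correlation threshold, and epipolar tolerance are illustrative assumptions, not the paper's values, and the discrepancy-vector refinement is omitted.

```python
# Hedged sketch of homography-guided prediction with template-matching
# correction and an epipolar consistency check. H is a 3x3 global
# homography, F a 3x3 fundamental matrix, both from initial matches.
import cv2
import numpy as np

def predict_and_refine(img1, img2, pt, H, F, win=15, search=40):
    # 1. predict the correspondence with the global homography
    p = cv2.perspectiveTransform(np.float32([[pt]]), H)[0, 0]
    x, y = int(round(pt[0])), int(round(pt[1]))
    px, py = int(round(p[0])), int(round(p[1]))
    if min(x - win, y - win, px - search, py - search) < 0:
        return None                               # too close to a border
    # 2. template around the feature, search region around the prediction
    tmpl = img1[y - win:y + win + 1, x - win:x + win + 1]
    region = img2[py - search:py + search + 1, px - search:px + search + 1]
    if region.shape[0] <= tmpl.shape[0] or region.shape[1] <= tmpl.shape[1]:
        return None
    # 3. correct the prediction by normalized cross-correlation
    res = cv2.matchTemplate(region, tmpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    match = (px - search + loc[0] + win, py - search + loc[1] + win)
    # 4. keep the match only if it obeys the epipolar constraint
    a, b, c = cv2.computeCorrespondEpilines(np.float32([[pt]]), 1, F)[0, 0]
    dist = abs(a * match[0] + b * match[1] + c) / np.hypot(a, b)
    return match if score > 0.7 and dist < 2.0 else None
```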

    Interferometric Synthetic Aperture RADAR and Radargrammetry towards the Categorization of Building Changes

    The purpose of this work is the investigation of SAR techniques relying on multi-image acquisition for fully automatic and rapid change-detection analysis at the building level. In particular, the benefits and limitations of the complementary use of two specific SAR techniques, InSAR and radargrammetry, in an emergency context are examined in terms of speed, global applicability, and accuracy. The analysis is performed using spaceborne SAR data.

    A Study on Image Registration between High Resolution Optical and SAR Images Using SAR-SIFT and DLSS

    Thesis (Master's) -- Graduate School of Seoul National University: College of Engineering, Department of Civil and Environmental Engineering, August 2018. Advisor: Yongil Kim.
    With recent advances in satellite sensor technology, Earth-observation satellites carrying a variety of sensors have been launched, and research on the fused analysis of multi-sensor satellite imagery is actively underway. In particular, because optical and SAR sensors operate in different wavelength bands, using the two image types together yields more detailed information about the land surface, and the combination is broadly applicable to remote sensing tasks such as object extraction, change detection, and disaster monitoring. A prerequisite for such applications is the registration of the two images as a preprocessing step. However, differences in sensor attitude at acquisition time and in the wavelength bands used cause geometric and radiometric discrepancies between optical and SAR images, making registration particularly difficult. These discrepancies are accentuated in built-up urban areas and are more pronounced in high-resolution imagery than in medium- or low-resolution imagery. This study therefore proposes a methodology for effectively registering high-resolution optical and SAR images over urban areas. Previous work on optical-SAR registration falls broadly into feature-based and intensity-based methods. Intensity-based methods are effective for images with different spectral characteristics, but they are applicable only when there is little distortion or geometric displacement between the images; high-resolution optical and SAR images exhibit local distortions, and geometric displacements of tens of meters can occur between them. Registration research for such imagery has consequently focused on feature-based methods. Feature-based methods, however, extract many false match pairs between optical and SAR images owing to their differing spectral characteristics, which degrades registration performance. Hybrid methods combining the two approaches have been proposed, but they have proven applicable only in limited settings, such as areas containing circular features or areas without dense buildings. To improve on this, this study proposes a registration method that combines the feature-based SAR-SIFT technique with the intensity-based DLSS technique, adding three stages for match-pair extraction: preprocessing, candidate match-pair extraction, and precise match-pair extraction. Feature points are first extracted with SAR-SIFT, and match pairs are extracted at those points with DLSS. Because the extracted pairs still contain many false matches, candidate pairs are screened in the preprocessing and candidate-extraction stages using thresholds and the displacement between feature points, and RANSAC is then applied to the candidates to extract the precise match pairs.
    Finally, an affine transformation is estimated from the precise match pairs and applied to generate a SAR image registered to the optical image. To validate the accuracy of the approach, KOMPSAT-2 images, a representative high-resolution optical dataset, and TerraSAR-X and COSMO-SkyMed high-resolution SAR images were used for both visual and quantitative evaluation. For the visual assessment, mosaic images were generated, and the preservation of object shapes across the boundary between the two images confirmed that the registration performed well. For the quantitative assessment, RMSE I, computed from manual checkpoints, and RMSE II, computed by cross-validation, were used; across all test sites, RMSE I ranged from 1.51 m to 2.04 m and RMSE II from 1.34 m to 1.69 m, a strong level of accuracy compared with previous studies. These results indicate that the proposed method is effective for registering high-resolution optical and SAR imagery and can serve as an effective registration technique for the fused analysis of the two image types.
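    The final screening-and-fitting stage lends itself to a compact sketch. In the Python/OpenCV code below, a median-displacement test stands in for the thesis's threshold-based candidate screening, and the pixel tolerances are illustrative assumptions rather than the tuned values; with few outliers remaining, cv2.estimateAffine2D applies RANSAC to recover the affine model used for resampling.

```python
# Hedged sketch: screen candidate match pairs by displacement
# consistency, fit an affine transform with RANSAC, and resample the
# SAR image into the optical image's geometry. Tolerances (20 px, 2 px)
# are illustrative assumptions, not the thesis's tuned values.
import cv2
import numpy as np

def refine_and_register(pts_opt, pts_sar, sar_img, out_shape, tol=2.0):
    pts_opt = np.float32(pts_opt)
    pts_sar = np.float32(pts_sar)
    # candidate screening: drop pairs whose displacement deviates far
    # from the median displacement between the two point sets
    d = pts_opt - pts_sar
    keep = np.linalg.norm(d - np.median(d, axis=0), axis=1) < 20.0
    # precise pairs + affine model via RANSAC
    A, inliers = cv2.estimateAffine2D(pts_sar[keep], pts_opt[keep],
                                      method=cv2.RANSAC,
                                      ransacReprojThreshold=tol)
    # warp the SAR image onto the optical image grid (width, height)
    registered = cv2.warpAffine(sar_img, A, (out_shape[1], out_shape[0]))
    return registered, A, inliers
```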

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning technology, aiming to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.
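    As a toy illustration of "multiple processing layers" learning increasingly abstract representations, the following PyTorch sketch stacks two convolutional layers over single-channel SAR patches; the layer sizes, patch size, and ten-class output are arbitrary assumptions for the sketch and are not tied to any chapter of the reprint.

```python
# Toy convolutional classifier for single-channel SAR patches,
# illustrating stacked processing layers. All sizes are assumptions.
import torch
import torch.nn as nn

class TinySARNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(           # stacked conv layers:
            nn.Conv2d(1, 16, 3, padding=1),      # low-level edges/speckle
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1),     # mid-level texture
            nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                        # x: (B, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

logits = TinySARNet()(torch.randn(4, 1, 64, 64))  # -> shape (4, 10)
```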

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and the Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.

    On-the-fly dense 3D surface reconstruction for geometry-aware augmented reality.

    Augmented Reality (AR) is an emerging technology that makes seamless connections between virtual space and the real world by superimposing computer-generated information onto the real-world environment. AR can provide additional information in a more intuitive and natural way than any other information-delivery method that humans have ever invented. Camera tracking is the enabling technology for AR and has been well studied for the last few decades. Beyond tracking, sensing and perception of the surrounding environment are also very important and challenging problems. Although there are existing hardware solutions, such as Microsoft Kinect and HoloLens, that can sense and build the environmental structure, they are either too bulky or too expensive for AR. In this thesis, challenging real-time dense 3D surface reconstruction technologies are studied and reformulated to move basic position-aware AR towards geometry-aware AR, with an outlook towards context-aware AR. We initially propose to reconstruct the dense environmental surface from the sparse points of Simultaneous Localisation and Mapping (SLAM), but this approach is prone to failure in challenging Minimally Invasive Surgery (MIS) scenes, such as in the presence of deformation and surgical smoke. We subsequently adopt stereo vision with SLAM for more accurate and robust results. Building on the recent success of deep learning, we present learning-based single-image reconstruction and achieve state-of-the-art results. Moreover, we propose context-aware AR, one step beyond purely geometry-aware AR, towards high-level conceptual interaction modelling in complex AR environments for an enhanced user experience. Finally, a learning-based smoke-removal method is proposed to ensure accurate and robust reconstruction under extreme conditions such as the presence of surgical smoke.
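    As a rough illustration of the stereo-vision route mentioned above, the Python/OpenCV sketch below computes a disparity map with semi-global block matching on a rectified pair and converts it to metric depth; the focal length and baseline are placeholder values, and the thesis's SLAM-integrated pipeline is considerably more involved.

```python
# Hedged sketch: dense depth from a rectified stereo pair via
# semi-global block matching, then Z = f * B / d. Camera parameters
# here are placeholders, not values from the thesis.
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.06):
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                 blockSize=5)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                    # mask invalid matches
    return focal_px * baseline_m / disp         # depth in metres
```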

    Object Recognition

    Vision-based object recognition tasks are familiar from our everyday activities, such as driving a car in the correct lane, and we perform them effortlessly in real time. In recent decades, with the advancement of computer technology, researchers and application developers have been trying to mimic the human capability of visual recognition. Such a capability would allow machines to free humans from boring or dangerous jobs.

    Multi-scale texture segmentation of synthetic aperture radar images

    EThOS - Electronic Theses Online Service, United Kingdom.

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary field between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information-processing architectures, while understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.