
    μžμœ¨μ£Όν–‰μ„ μœ„ν•œ 카메라 기반 거리 μΈ‘μ • 및 μΈ‘μœ„

    Doctoral dissertation -- Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Seung-Woo Seo.

    Automated driving vehicles and advanced driver assistance systems (ADAS) remain an important research topic in the transportation area: they promise to reduce road accidents and relieve traffic congestion. Automated driving vehicles are composed of two parts. On-board sensors observe the environment, and the captured sensor data are then processed to interpret the environment and make appropriate driving decisions. Some sensors are already widely used in existing driver-assistance systems: camera systems recognize lanes on roads in lane-keeping systems; radars (Radio Detection and Ranging) measure the distance to the vehicle ahead in adaptive cruise systems so that a safe distance can be guaranteed; LIDAR (Light Detection and Ranging) sensors detect other vehicles or pedestrians in the vehicle path in autonomous emergency braking systems to avoid collisions; accelerometers measure vehicle speed changes, which is especially useful for air-bags; wheel encoder sensors measure wheel rotations in anti-lock brake systems; and GPS sensors embedded on vehicles provide the global position of the vehicle for path navigation. In this dissertation, we cover three important applications for automated driving vehicles using camera sensors in vehicular environments. First, precise and robust distance measurement is one of the most important requirements for driving assistance systems and automated driving systems. We propose a new method for accurate distance measurement through a frequency-domain analysis based on a stereo camera, exploiting key information obtained from the analysis of the captured images. Second, precise and robust localization is another important requirement for safe automated driving.
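The abstract gives no equations, but any stereo-camera distance estimate ultimately rests on the standard pinhole triangulation relation Z = fB/d. A minimal sketch (the KITTI-like focal length and baseline in the comment are illustrative assumptions, not values from the dissertation):

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Triangulate metric distance from stereo disparity.

    Z = f * B / d, where f is the focal length in pixels,
    B the baseline between the two cameras in meters,
    and d the disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With KITTI-like parameters (f ~ 721 px, B ~ 0.54 m),
# a disparity of 36 px corresponds to roughly 10.8 m.
```

Note how the formula's hyperbolic shape explains why sub-pixel disparity accuracy matters: at long range a small disparity error translates into a large distance error.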
We propose a method for robust localization in diverse driving situations that measures the vehicle position using a camera with respect to a given map, for vision-based navigation. The proposed method includes a technique for removing dynamic objects and preserving features in vehicular environments using a background model accumulated from previous frames, and we improve image quality using the illuminant-invariance characteristics of log-chromaticity. We also propose a vehicle localization method using the structure tensor and mutual information theory. Finally, we propose a novel algorithm for estimating the drivable collision-free space for autonomous navigation of on-road vehicles. In contrast to previous approaches that use stereo cameras or LIDAR alone, we solve this problem using a sensor fusion of cameras and LIDAR.

    Contents
    1 Introduction
      1.1 Background and Motivations
      1.2 Contributions and Outline of the Dissertation
        1.2.1 Accurate Object Distance Estimation based on Frequency-Domain Analysis with a Stereo Camera
        1.2.2 Visual Map Matching based on Structural Tensor and Mutual Information using 3D High Resolution Digital Map
        1.2.3 Free Space Computation using a Sensor Fusion of LIDAR and RGB Camera in Vehicular Environment
    2 Accurate Object Distance Estimation based on Frequency-Domain Analysis with a Stereo Camera
      2.1 Introduction
      2.2 Related Works
      2.3 Algorithm Description
        2.3.1 Overall Procedure
        2.3.2 Preliminaries
        2.3.3 Pre-processing
      2.4 Frequency-domain Analysis
        2.4.1 Procedure
        2.4.2 Contour-based Cost Computation
      2.5 Cost Optimization and Distance Estimation
        2.5.1 Disparity Optimization
        2.5.2 Post-processing and Distance Estimation
      2.6 Experimental Results
        2.6.1 Test Environment
        2.6.2 Experiment on KITTI Dataset
        2.6.3 Performance Evaluation and Analysis
      2.7 Conclusion
    3 Visual Map Matching Based on Structural Tensor and Mutual Information using 3D High Resolution Digital Map
      3.1 Introduction
      3.2 Related Work
      3.3 Methodology
        3.3.1 Sensor Calibration
        3.3.2 Digital Map Generation and Synthetic View Conversion
        3.3.3 Dynamic Object Removal
        3.3.4 Illuminant Invariance
        3.3.5 Visual Map Matching using Structure Tensor and Mutual Information
      3.4 Experiments and Result
        3.4.1 Methodology
        3.4.2 Quantitative Results
      3.5 Conclusions and Future Works
    4 Free Space Computation using a Sensor Fusion of LIDAR and RGB Camera in Vehicular Environments
      4.1 Introduction
      4.2 Methodology
        4.2.1 Dense Depth Map Generation
        4.2.2 Color Distribution Entropy
        4.2.3 Edge Extraction
        4.2.4 Temporal Smoothness
        4.2.5 Spatial Smoothness
      4.3 Experiment and Evaluation
        4.3.1 Evaluated Methods
        4.3.2 Experiment on KITTI Dataset
      4.4 Conclusion
    5 Conclusion
    Abstract (In Korean)
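The localization pipeline above improves image quality via the illuminant-invariance of log-chromaticity (Section 3.3.4). As a rough illustration of that idea, here is the classic Finlayson-style 1-D invariant, in which 2-D log-chromaticity coordinates are projected along a calibrated direction so that Planckian illumination changes cancel; the angle `theta` is assumed known, and the dissertation's exact formulation may differ:

```python
import numpy as np

def log_chromaticity_invariant(rgb, theta):
    """Project per-pixel log-chromaticity onto the direction
    orthogonal to illuminant variation, yielding a 1-D
    shadow/illumination-invariant image (Finlayson-style sketch).

    rgb:   float array of shape (H, W, 3), values > 0
    theta: calibrated invariant direction in radians (assumed known)
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    chi1 = np.log(r / g)        # log-chromaticity coordinates
    chi2 = np.log(b / g)
    # Shifts along (-sin(theta), cos(theta)) -- the modelled
    # illuminant direction -- cancel in this projection.
    return chi1 * np.cos(theta) + chi2 * np.sin(theta)
```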

    Similarity Measure Based on Entropy and Census and Multi-Resolution Disparity Estimation Technique for Stereo Matching

    Stereo matching is one of the most active research areas in computer vision. It aims to obtain 3D information by extracting correct correspondences between two images captured from different points of view. Stereo matching research has two parts: similarity measures between corresponding points and optimization techniques for dense disparity estimation. The crux of the stereo matching problem, from the similarity-measure perspective, is how to deal with the inherent ambiguity that results from the ambiguous local appearance of image points. Similarity measures in stereo matching are classified as feature-based, intensity-based, or non-parametric, and most measures in the literature are based on pixel-intensity comparison. When images are taken under different illumination conditions or with different sensors, it is very unlikely that corresponding pixels have the same intensity, which creates false correspondences if matching relies on intensity alone. In particular, illumination variation between input images can seriously degrade the performance of stereo matching algorithms. In this situation, mutual-information-based methods are powerful; however, they remain ambiguous or erroneous under local illumination variations between images. Similarity measures robust to these radiometric variations are therefore indispensable for stereo matching. Optimization methods in stereo matching fall into two categories, local and global, and most state-of-the-art algorithms are global. Global optimization can greatly suppress the matching ambiguities caused by factors such as occluded and textureless regions, but it is usually computationally expensive due to the slow-converging optimization process.
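The mutual-information measure discussed above can be estimated from a joint intensity histogram as MI = H(I1) + H(I2) - H(I1, I2); because it depends only on the statistical relationship between intensities, it tolerates complex global intensity transformations. A minimal sketch (the bin count and plug-in estimator are illustrative choices, not the dissertation's):

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information MI = H(I1) + H(I2) - H(I1, I2) between
    two equally sized grayscale images, estimated from the joint
    intensity histogram."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability estimate
    px = pxy.sum(axis=1)               # marginal of img1
    py = pxy.sum(axis=0)               # marginal of img2

    def entropy(p):
        p = p[p > 0]                   # ignore empty bins
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy.ravel())
```

Note that MI between an image and its negative is as high as MI of the image with itself, which is exactly the robustness to intensity transformations the abstract refers to.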
    This dissertation proposes a stereo matching similarity measure based on entropy and the census transform, together with an optimization technique that uses dynamic programming to estimate disparity efficiently in a multi-resolution framework. The proposed similarity measure combines entropy, a Haar wavelet feature vector, and a modified census transform. In general, the mutual-information similarity measure, based on the entropy of the stereo images and the disparity map, is a popular and powerful measure that is robust to complex intensity transformations. However, it remains ambiguous or erroneous under local radiometric variations, since it only accounts for global variation between images and contains no spatial information. Haar wavelet responses express the frequency properties of image regions and are robust to various intensity changes and bias, so entropy is combined with a Haar wavelet feature vector as a geometric measure. A modified census transform is used as an additional spatial similarity measure: the census transform is a well-known non-parametric measure that performs well in textureless and disparity-discontinuity regions and is robust to noise. The proposed combination of entropy with the Haar wavelet feature vector and the modified census transform is invariant to both local radiometric variations and global illumination changes, so it can find correspondences in images that undergo local as well as global radiometric variations. The proposed optimization method is a new disparity estimation technique based on dynamic programming with 8-direction energy aggregation. With 8-direction aggregation, accurate disparities can be found at disparity discontinuities, and the streaking artifacts typical of scanline dynamic programming are suppressed.
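The census transform mentioned above encodes each pixel as a bit string recording which neighbours are darker than the centre, so any monotonic intensity change (gain or bias) leaves the code, and hence the Hamming matching cost, untouched. A minimal sketch of the standard transform (the window size and the wrap-around border handling are simplifying assumptions; the dissertation uses a modified variant):

```python
import numpy as np

def census_transform(img, win=3):
    """Standard census transform: encode each pixel as a bit string
    recording which neighbours in a win x win window are darker than
    the centre pixel. Non-parametric, hence invariant to monotonic
    intensity changes (gain/bias)."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # np.roll wraps at the borders -- a simplification
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between two census codes = Hamming distance."""
    return bin(int(c1 ^ c2)).count("1")
```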
    Finally, a multi-resolution scheme is proposed to increase the efficiency of disparity estimation. A Gaussian image pyramid, which prevents aliasing at the low-resolution pyramid levels, is used, and matching is performed at every level to find accurate disparities. This makes matching efficient while producing an accurate disparity map. The proposed method is validated with experimental results on stereo images.

    Contents
    Chapter 1 Introduction
      1.1 Research Objectives and Background
      1.2 Related Work
      1.3 Research Content
      1.4 Organization of the Thesis
    Chapter 2 Stereo Vision and Stereo Matching
      2.1 Stereo Vision
      2.2 Stereo Matching
        2.2.1 Similarity Measures
        2.2.2 Optimization Methods
      2.3 Similarity Measures Robust to Environmental Changes
        2.3.1 Feature-based Similarity Measures
        2.3.2 Intensity-based Similarity Measures
        2.3.3 Non-parametric Similarity Measures
    Chapter 3 Entropy- and Census-based Similarity Measures
      3.1 Entropy-based Similarity Measure
        3.1.1 Entropy
        3.1.2 MI Similarity Measure using Entropy
      3.2 Proposed Entropy Similarity Measure Combined with Haar Wavelet Features
        3.2.1 Per-pixel Entropy
        3.2.2 Entropy Combined with Haar Wavelet Features
      3.3 Proposed Similarity Measure based on the Census Transform
        3.3.1 Census Transform
        3.3.2 Proposed Similarity Measure using the Census Transform
    Chapter 4 Disparity Estimation using 8-direction Dynamic Programming
      4.1 Dynamic Programming
      4.2 Proposed 8-direction Dynamic Programming
    Chapter 5 Multi-resolution Stereo Matching
      5.1 Gaussian Image Pyramid
      5.2 Proposed Multi-resolution Stereo Matching
    Chapter 6 Experiments and Discussion
      6.1 Matching Performance Evaluation Method
      6.2 Stereo Matching Experiments
        6.2.1 RDS Image Experiments
        6.2.2 Standard Images without Environmental Changes
        6.2.3 Standard Images with Environmental Changes
        6.2.4 Real Image Experiments
      6.3 Computation Speed
      6.4 Discussion of the Matching Performance of the Proposed Method
    Chapter 7 Conclusion
    References