5 research outputs found

    Document and Scene Text Image Rectification Based on Alignment Properties

    Ph.D. dissertation -- Seoul National University, Graduate School, Department of Electrical and Computer Engineering, August 2017. Advisor: Nam Ik Cho.
Optical character recognition (OCR) of text images captured by cameras plays an important role in scene understanding. However, OCR of camera-captured images is still considered a challenging problem, even after text detection (localization), mainly because of the geometric distortions caused by page curve and perspective view; rectification of these distortions has therefore been an essential pre-processing step for recognition. Accordingly, many text image rectification methods have been proposed that recover a fronto-parallel view from a single distorted image. Recently, many researchers have focused on the properties of well-rectified text.
In this respect, this dissertation presents novel alignment properties for text image rectification, which are encoded into the proposed cost functions. By minimizing the cost functions, the transformation parameters for rectification are obtained. These properties are applied to three topics: document image dewarping, scene text rectification, and curved surface dewarping in real scenes. First, a document image dewarping method is proposed based on the alignments of text-lines and line segments. Conventional text-line based dewarping methods have problems when handling complex layouts and/or very few text-lines; when there are few aligned text-lines in the image, photos, graphics, and/or tables usually take up a large portion of the input instead. Hence, for robust document dewarping, the proposed method uses line segments in the image in addition to the aligned text-lines. Based on the assumption and observation that all transformed line segments remain straight (line-to-line mapping) and that many of them are horizontally or vertically aligned in well-rectified images, the proposed method encodes these properties into the cost function in addition to the text-line based cost. By minimizing the function, the proposed method obtains the transformation parameters for page curve, camera pose, and focal length, which are used for document image rectification. Considering that there can be many outliers, such as line segments with arbitrary directions and mis-detected text-lines, the overall algorithm is designed in an iterative manner: at each step, the proposed method removes the text-lines and line segments that are not well aligned and then minimizes the cost function with the updated information. Experimental results show that the proposed method is robust to a variety of page layouts. This dissertation also presents a method for scene text rectification.
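The fit-then-prune loop described for document dewarping can be sketched as follows. This is a toy one-dimensional analogue, assuming a single global rotation parameter in place of the thesis's page-curve/pose/focal-length model; the function name, data, and threshold are hypothetical.

```python
def fit_rotation_with_outlier_removal(angles, threshold=0.1, max_iter=10):
    """Toy version of the iterative scheme: estimate a parameter
    (here, one global rotation) by least squares over the current
    inliers, drop primitives whose residual exceeds a threshold,
    and refit on the survivors until no more outliers remain."""
    inliers = list(angles)
    rotation = 0.0
    for _ in range(max_iter):
        # Least-squares estimate of a single offset = mean of observations.
        rotation = sum(inliers) / len(inliers)
        survivors = [a for a in inliers if abs(a - rotation) <= threshold]
        if len(survivors) == len(inliers):
            break  # converged: no primitive was pruned this round
        inliers = survivors
    return rotation, inliers

# Segment angles roughly aligned near 0.05 rad, plus two
# arbitrary-direction outliers that the loop should prune.
angles = [0.04, 0.05, 0.06, 0.05, 0.9, -1.2]
rot, kept = fit_rotation_with_outlier_removal(angles)  # rot ~ 0.05
```

Alternating between pruning and refitting is what lets the real method tolerate mis-detected text-lines and arbitrarily oriented segments: each refit is computed only from primitives that currently satisfy the alignment properties.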
Conventional methods for scene text rectification mainly exploit glyph properties: the characters of many languages have horizontal/vertical strokes and some symmetric shapes. However, since these methods consider only the shape properties of individual characters, without considering the alignments of characters, they work well only for images with a single character and still yield mis-aligned results for images with multiple characters. To alleviate this problem, the proposed method explicitly imposes alignment constraints on rectified results. To be precise, character alignments as well as glyph properties are encoded in the proposed cost function, and the transformation parameters are obtained by minimizing the function. Also, in order to encode the character alignments into the cost function, the proposed method separates the text into individual characters using a projection profile method before optimizing the cost function. Then, the top and bottom lines are estimated using least-squares line fitting with RANSAC. The overall algorithm performs character segmentation, line fitting, and rectification iteratively. Since the cost function is non-convex and involves many variables, the proposed method also develops an optimization scheme based on the Augmented Lagrange Multiplier method. The proposed method is evaluated on real and synthetic text images, and experimental results show that it achieves higher OCR accuracy than conventional approaches while also yielding visually pleasing results. Finally, the proposed method is extended to curved surface dewarping in real scenes. In real scenes, there are many circular objects, such as medicine bottles or beverage cans, and their curved surfaces can be modeled as Generalized Cylindrical Surfaces (GCS).
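The RANSAC line-fitting step (estimating the top and bottom lines of the segmented characters) can be sketched as below. The function, tolerances, and sample data are illustrative assumptions, not the thesis's implementation.

```python
import random

def ransac_line(points, n_trials=200, inlier_tol=1.0, seed=0):
    """Fit y = a*x + b with RANSAC: repeatedly draw a candidate
    line through two random points, keep the candidate with the
    largest inlier set, then refit that set by least squares."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical candidate; skip for this simple model
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(a * x + b - y) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Ordinary least-squares refit on the best inlier set.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Character baseline points on y = 2 + 0.1*x, with one outlier
# (e.g., a mis-segmented character) that RANSAC should reject.
pts = [(0, 2.0), (10, 3.0), (20, 4.0), (30, 5.0), (15, 20.0)]
a, b = ransac_line(pts)
```

The random-sampling step makes the fit robust to mis-segmented characters, which is why plain least squares alone would not suffice in this stage.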
These curved surfaces often contain significant text and figures; however, the text has an irregular structure compared to documents. Therefore, conventional dewarping methods based on the properties of well-rectified text have difficulty rectifying them. Based on the observation that many curved surfaces include well-aligned line segments (e.g., boundary lines of objects or barcodes), the proposed method rectifies the curved surfaces by exploiting the proposed line segment terms. Experimental results on a range of images with curved surfaces of circular objects show that the proposed method performs rectification robustly.

1 Introduction
  1.1 Document image dewarping
  1.2 Scene text rectification
  1.3 Curved surface dewarping in real scene
  1.4 Contents
2 Related work
  2.1 Document image dewarping
    2.1.1 Dewarping methods using additional information
    2.1.2 Text-line based dewarping methods
  2.2 Scene text rectification
  2.3 Curved surface dewarping in real scene
3 Document image dewarping
  3.1 Proposed cost function
    3.1.1 Parametric model of dewarping process
    3.1.2 Cost function design
    3.1.3 Line segment properties and cost function
  3.2 Outlier removal and optimization
    3.2.1 Jacobian matrix of the proposed cost function
  3.3 Document region detection and dewarping
  3.4 Experimental results
    3.4.1 Experimental results on text-abundant document images
    3.4.2 Experimental results on non-conventional document images
  3.5 Summary
4 Scene text rectification
  4.1 Proposed cost function for rectification
    4.1.1 Cost function design
    4.1.2 Character alignment properties and alignment terms
  4.2 Overall algorithm
    4.2.1 Initialization
    4.2.2 Character segmentation
    4.2.3 Estimation of the alignment parameters
    4.2.4 Cost function optimization for rectification
  4.3 Experimental results
  4.4 Summary
5 Curved surface dewarping in real scene
  5.1 Proposed curved surface dewarping method
    5.1.1 Pre-processing
  5.2 Experimental results
  5.3 Summary
6 Conclusions
Bibliography
Abstract (Korean)
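A generalized cylindrical surface can be unrolled by re-parameterizing it with the arc length of its directrix. Below is a minimal sketch of that geometry, assuming a polyline directrix swept along a straight axis; this is a generic GCS illustration, not the thesis's surface model or estimation procedure.

```python
import math

def flatten_gcs(directrix, t):
    """A generalized cylindrical surface (GCS) sweeps a planar
    directrix curve C along a straight axis a: P(i, t) = C[i] + t*a.
    Mapping each point to (arc length along C, t) unrolls the
    curved surface into a plane, which is the dewarped image grid."""
    # Cumulative arc length along the polyline directrix.
    s = [0.0]
    for (x0, z0), (x1, z1) in zip(directrix, directrix[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, z1 - z0))
    return [(si, t) for si in s]

# Quarter circle of radius 1 sampled as a polyline: the unrolled
# width should approach the true arc length pi/2.
n = 1000
arc = [(math.cos(k * math.pi / 2 / n), math.sin(k * math.pi / 2 / n))
       for k in range(n + 1)]
flat = flatten_gcs(arc, t=0.0)
width = flat[-1][0]
```

For a can or bottle, the directrix is the circular cross-section seen by the camera, so estimating it (here taken as given) is what the thesis's line-segment cost terms make possible.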

    Experimental Investigation of Pollutant Dispersion over Hilly Terrain

    No full text
    Master

    Single-Image Haze Removal Combining the Dark Channel Prior and Contrast Enhancement

    No full text
    Master's thesis -- Seoul National University, Graduate School, Department of Electrical and Computer Engineering, February 2013. Advisor: Nam Ik Cho.
Techniques for removing haze from images of hazy scenes have advanced considerably in recent years. In hazy images, the brightness and color of objects are distorted, which degrades the performance of computer vision algorithms such as object detection, tracking, and recognition. Haze removal has therefore been widely studied as a pre-processing step for improving the performance of such vision algorithms. Earlier methods mainly removed haze by maximizing the local contrast of the hazy image; however, because these are image enhancement rather than image restoration approaches, they suffer from color distortion relative to the original scene. As an alternative, the dark channel prior algorithm was proposed. It takes an image restoration approach based on a statistical observation of haze-free images and thus introduces little color distortion, but its dehazing effect is visually less pronounced than that of the local contrast enhancement methods. This thesis proposes a method that achieves a visually clear dehazing effect while preserving natural colors. Specifically, by applying the dark channel prior condition and the local contrast enhancement condition simultaneously, the proposed method estimates the haze transmission more accurately than when either condition is used alone, and uses this estimate to remove haze from a single image. To this end, a dark channel prior function and a local contrast function, each taking the transmission as its variable, are defined for every patch in the image. In addition, the hue difference from the dark channel prior result (which has little color distortion) is used as a weight between the two functions, and the optimal transmission is estimated by minimizing their weighted sum.
In summary, by simultaneously using the color-distortion weight, the dark channel prior function, and the local contrast enhancement function, the proposed method obtains results with little color distortion that are, at the same time, visually sharp. Experiments on various images confirm that, compared with existing methods, the proposed method improves image contrast while introducing less color distortion, complementing the drawbacks of the previous approaches.

Chapter 1 Introduction
  Section 1 Background
  Section 2 Objectives and scope
  Section 3 Organization of the thesis
Chapter 2 Previous work
  Section 1 Methods using multiple images
  Section 2 Methods using a single image
    2.1 Methods based on local contrast enhancement
    2.2 Methods based on the dark channel prior
    2.3 Limitations of existing methods
  Section 3 Other methods
Chapter 3 Proposed method
  Section 1 Estimating the haze value
  Section 2 Design of the weighted functions
  Section 3 Design of the weights and cost function
  Section 4 Refinement and haze removal
Chapter 4 Experiments and results
Chapter 5 Conclusion
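The dark channel prior baseline that this thesis builds on can be sketched as follows. The code shows only the standard per-patch DCP transmission estimate (He et al.'s formula, t = 1 - omega * dark_channel(I/A)); the thesis's joint minimization of a dark-channel term and a local-contrast term with a hue-difference weight is not reproduced here, and the patch data is a made-up example.

```python
def dark_channel(patch):
    """Dark channel of an RGB patch: the minimum intensity over
    all pixels and all color channels. The dark channel prior
    observes that this is near zero for haze-free outdoor patches."""
    return min(min(px) for px in patch)

def dcp_transmission(patch, airlight, omega=0.95):
    """Baseline transmission estimate from the dark channel prior:
    t = 1 - omega * dark_channel(I / A), where A is the airlight.
    omega < 1 keeps a trace of haze for distant objects."""
    normalized = [[c / airlight for c in px] for px in patch]
    return 1.0 - omega * dark_channel(normalized)

# A uniformly hazy gray patch: a high dark channel signals dense
# haze, so the estimated transmission is low (~0.24 here).
patch = [[0.8, 0.8, 0.8]] * 9
t = dcp_transmission(patch, airlight=1.0)
```

Once the transmission t is known, the haze-free radiance follows from the haze imaging model J = (I - A) / t + A; the thesis's contribution is to make the estimate of t more accurate by also rewarding local contrast in the restored patch.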