90,059 research outputs found

    Image Restoration Using Deep Neural Architecture Search

    Thesis (Ph.D.) -- Graduate School of Seoul National University: College of Engineering, Department of Electrical and Computer Engineering, 2021.8. μ•ˆμ€€μ˜.

    Image restoration is an important technology that can be used as a pre-processing step to improve the performance of various vision tasks. Image super-resolution, one of the important tasks in image restoration, restores a high-resolution (HR) image from a low-resolution (LR) observation. The recent progress of deep convolutional neural networks has enabled great success in single image super-resolution (SISR), and performance continues to improve as networks are made deeper and their structures more sophisticated. However, finding an optimal structure for a given problem is a difficult task, even for human experts. For this reason, neural architecture search (NAS) methods have been introduced, which automate the procedure of constructing network structures. In this dissertation, I propose a new single image super-resolution framework based on neural architecture search (NAS). Since better-performing networks tend to be deeper and more complex, I apply NAS algorithms to find an optimal network while reducing the effort spent on network design. The proposed scheme is summarized in three topics: image super-resolution using efficient neural architecture search, multi-branch neural architecture search for lightweight image super-resolution, and neural architecture search for image super-resolution using meta-transfer learning.

    First, I extend NAS to the super-resolution domain and find a lightweight, densely connected network named DeCoNASNet. I use a hierarchical search strategy to find the best connections for fusing local and global features. In this process, I define a complexity-based penalty and add it to the reward term of the REINFORCE algorithm, so that the controller favors architectures that perform well with few parameters. Experiments show that DeCoNASNet outperforms state-of-the-art lightweight super-resolution networks designed by hand as well as existing NAS-based designs.
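    The abstract does not spell out the exact reward, but the idea of adding a complexity-based penalty to the REINFORCE reward can be sketched roughly as below. This is a minimal Python/PyTorch illustration under stated assumptions: evaluate_psnr and complexity are hypothetical stand-ins for training/evaluating the sampled child network and measuring its size, and the toy per-decision logits stand in for the actual controller described in the dissertation.

```python
# Sketch: REINFORCE controller update with a complexity-based penalty in the reward.
# Assumptions (not from the paper): a fixed number of connection decisions, a
# hypothetical evaluate_psnr(arch) routine, and a simple complexity(arch) proxy.
import torch
from torch.distributions import Categorical

NUM_DECISIONS, NUM_CHOICES = 8, 4          # e.g. 8 connection slots, 4 options each
LAMBDA = 0.1                               # weight of the complexity penalty

# Toy "controller": independent logits per decision (stand-in for an LSTM policy).
logits = torch.zeros(NUM_DECISIONS, NUM_CHOICES, requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=3e-4)
baseline = 0.0                             # moving-average baseline to reduce variance

def sample_architecture():
    dist = Categorical(logits=logits)
    arch = dist.sample()                   # one choice per decision slot
    return arch, dist.log_prob(arch).sum()

def reward(arch):
    psnr = evaluate_psnr(arch)             # assumed: trains/evaluates the child net
    penalty = LAMBDA * complexity(arch)    # assumed: e.g. parameter count of the net
    return psnr - penalty

for _ in range(1000):
    arch, log_prob = sample_architecture()
    r = reward(arch)
    baseline = 0.95 * baseline + 0.05 * r
    loss = -(r - baseline) * log_prob      # REINFORCE with baseline
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

    The penalty simply shifts the reward so that, between two architectures with similar validation PSNR, the smaller one receives the larger policy-gradient signal.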
    Second, I propose a new search space design with a multi-branch structure, which enlarges the search space so that multi-scale features can be captured and grainy areas are reconstructed better. I also adopt a parameter sharing scheme in the multi-branch network, so that the branches share information and the total number of network parameters is reduced. Experiments show that the proposed method finds an optimal SISR network about twenty times faster than existing methods while showing comparable performance in terms of PSNR versus parameters. A comparison of visual quality confirms that the resulting SISR network reconstructs texture areas better than previous methods, owing to the enlarged search space for multi-scale features.

    Lastly, I apply meta-transfer learning to the NAS procedure for image super-resolution. I train the controller and child network with a meta-learning scheme, which enables the controller to find promising networks for several scale factors simultaneously. Furthermore, the meta-trained child network is reused as the pre-trained parameters for the final evaluation phase, which further improves the final super-resolution results and efficiently reduces the search-evaluation gap.
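    The meta-transfer idea of the last topic (meta-train the child network so it adapts quickly to any scale factor, then reuse those weights to initialize the final training) can be sketched as a first-order, MAML-style inner/outer loop. The sketch below is an assumption-laden illustration, not the dissertation's training code: build_child_network, sample_task_batch, and sr_loss are hypothetical helpers, and the real procedure also meta-trains the controller.

```python
# Sketch: first-order MAML-style meta-training of a child SR network over several
# scale factors, so that the meta-trained weights serve as pre-trained parameters
# for the final evaluation phase. All helper functions below are hypothetical.
import copy
import torch

child = build_child_network()              # assumed: returns an nn.Module for SR
meta_opt = torch.optim.Adam(child.parameters(), lr=1e-4)
INNER_LR, SCALES = 1e-2, (2, 3, 4)

for _ in range(10000):
    meta_opt.zero_grad()
    for scale in SCALES:                   # one meta-task per scale factor
        fast = copy.deepcopy(child)        # inner-loop copy of the shared weights
        fast_params = list(fast.parameters())
        lr_imgs, hr_imgs = sample_task_batch(scale, split="support")
        loss = sr_loss(fast(lr_imgs), hr_imgs)
        grads = torch.autograd.grad(loss, fast_params)
        with torch.no_grad():              # one inner SGD step on the support set
            for p, g in zip(fast_params, grads):
                p -= INNER_LR * g
        lr_q, hr_q = sample_task_batch(scale, split="query")
        query_loss = sr_loss(fast(lr_q), hr_q)
        # First-order approximation: apply the query gradient of the adapted copy
        # directly to the shared weights.
        fast_grads = torch.autograd.grad(query_loss, fast_params)
        for p, g in zip(child.parameters(), fast_grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()

# Later, the meta-trained weights of `child` initialize the final per-scale
# training instead of training the searched network from scratch.
```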
λ˜ν•œ 메타 ν›ˆλ ¨λœ 신경망 κ΅¬μ‘°λŠ” μ΅œμ’… μ„±λŠ₯ 평가 μ‹œ ν•™μŠ΅μ˜ μ‹œμž‘μ μœΌλ‘œ μž¬μ‚¬μš© λ˜μ–΄ μ΅œμ’… 이미지 고해상도화 μ„±λŠ₯을 λ”μš± ν–₯μƒμ‹œν‚¬ 수 μžˆμ—ˆμœΌλ©°, 효과적으둜 검색-평가 괴리 문제λ₯Ό ν•΄κ²°ν•˜μ˜€λ‹€.1 INTRODUCTION 1 1.1 contribution 3 1.2 contents 4 2 Neural Architecture Search for Image Super-Resolution Using Densely Constructed Search Space: DeCoNAS 5 2.1 Introduction 5 2.2 Proposed Method 9 2.2.1 Overall structure of DeCoNASNet 9 2.2.2 Constructing the DNB 11 2.2.3 Constructing controller for the DeCoNASNet 13 2.2.4 Training DeCoNAS and complexity-based penalty 13 2.3 Experimental results 15 2.3.1 Settings 15 2.3.2 Results 16 2.3.3 Ablation study 21 2.4 Summary 22 3 Multi-Branch Neural Architecture Search for Lightweight Image Super-resolution 23 3.1 Introduction 23 3.2 Related Work 26 3.2.1 Single image super-resolution 26 3.2.2 Neural architecture search 27 3.2.3 Image super-resolution with neural architecture search 29 3.3 Method 32 3.3.1 Overview of the Proposed MBNAS 32 3.3.2 Controller and complexity-based penalty 33 3.3.3 MBNASNet 35 3.3.4 Multi-scale block with partially shared Nodes 37 3.3.5 MBNAS 38 3.4 datasets and experiments 39 3.4.1 Settings 39 3.4.2 Experiments on single image super-resolution (SISR) 41 3.5 Discussion 48 3.5.1 Effect of the complexity-based penalty to the performance of controller 49 3.5.2 Effect of multi-branch structure and partial parameter sharing scheme 50 3.5.3 Effect of gradient flow control weights and complexity-based penalty coefficient 51 3.6 Summary 52 4 Meta-transfer learning for simultaneous search of various scale image super-resolution 54 4.1 Introduction 54 4.2 Related Work 56 4.2.1 Single image super-resolution 56 4.2.2 Neural architecture search 57 4.2.3 Image super-resolution with neural architecture search 58 4.2.4 Meta-learning 59 4.3 Method 59 4.3.1 Meta-learning 60 4.3.2 Meta-transfer learning 62 4.3.3 Transfer-learning 63 4.4 datasets and experiments 63 4.4.1 Settings 63 4.4.2 Experiments on single image super-resolution(SISR) 64 4.5 Summary 66 5 Conclusion 69 Abstract (In Korean) 80λ°•

    FRESH – FRI-based single-image super-resolution algorithm

    In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need to learn patch pairs from external data sets. We achieve this by modeling images, and more precisely lines of images, as piecewise smooth functions and proposing a resolution enhancement method for this type of function. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging the multi-resolution analysis of wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along the vertical, horizontal, and diagonal directions of an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.
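    The core fusion step, combining a coarse linear reconstruction with FRI-estimated detail through a wavelet filter bank, can be illustrated with a toy 1-D sketch. The example below uses PyWavelets and treats the low-resolution line as the approximation band of a one-level wavelet decomposition; estimate_detail_fri is a hypothetical stand-in for the paper's FRI-based estimation of the missing detail coefficients, so this is only a simplified reading of the method, not the authors' algorithm.

```python
# Toy sketch of wavelet-domain fusion for 1-D super-resolution: the LR line is
# taken as the approximation band of a one-level DWT; a linear (zero-detail)
# reconstruction and an FRI-style reconstruction (estimated detail band) are both
# formed through the synthesis filter bank. estimate_detail_fri is hypothetical.
import numpy as np
import pywt

def upscale_line(lr_line, wavelet="db2"):
    # Linear reconstruction: inverse DWT with zero detail coefficients.
    zero_detail = np.zeros_like(lr_line)
    linear_rec = pywt.idwt(lr_line, zero_detail, wavelet)

    # FRI-flavored reconstruction: fuse an estimated detail band instead of zeros,
    # relying on a piecewise-smooth (finite rate of innovation) model of the line.
    detail_est = estimate_detail_fri(lr_line)      # assumed helper
    fri_rec = pywt.idwt(lr_line, detail_est, wavelet)
    return linear_rec, fri_rec

# In the paper, such 1-D enhancement is applied along the horizontal, vertical,
# and diagonal directions of the image and the reconstructions are combined.
```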