5,222 research outputs found
Efficient Residual Dense Block Search for Image Super-Resolution
Although remarkable progress has been made on single image super-resolution
due to the revival of deep convolutional neural networks, deep learning methods
are confronted with the challenges of computation and memory consumption in
practice, especially for mobile devices. Focusing on this issue, we propose an
efficient residual dense block search algorithm with multiple objectives to
hunt for fast, lightweight and accurate networks for image super-resolution.
Firstly, to accelerate the super-resolution network, we fully exploit the
variation of feature scale with the proposed efficient residual dense blocks. In
the proposed evolutionary algorithm, the locations of the pooling and upsampling
operators are searched automatically. Secondly, the network architecture is
evolved with the guidance of block credits to obtain an accurate super-resolution
network. A block credit reflects the effect of the current block and is earned
during the model evaluation process; it guides the evolution by weighting the
sampling probability of mutation to favor admirable blocks. Extensive
experimental results demonstrate the effectiveness of the proposed search
method, and the discovered efficient super-resolution models achieve better
performance than state-of-the-art methods with a limited number of parameters
and FLOPs.
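The abstract's idea of weighting the mutation sampling probability by block credits can be sketched as follows. This is an illustrative sketch only: the names, the inversion rule, and the roulette-wheel selection are assumptions, not the authors' actual implementation.

```python
import random

def mutation_probabilities(credits):
    """Blocks with low credit get a high chance of being mutated,
    so admirable (high-credit) blocks tend to be kept."""
    # Invert credits so that a weak block is a likely mutation target.
    inverted = [max(credits) - c + 1e-8 for c in credits]
    total = sum(inverted)
    return [w / total for w in inverted]

def pick_block_to_mutate(credits, rng=random):
    """Roulette-wheel selection over the inverted credits."""
    probs = mutation_probabilities(credits)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(credits) - 1

credits = [0.9, 0.2, 0.5]          # block 1 performs worst
probs = mutation_probabilities(credits)
assert probs[1] == max(probs)      # the weakest block is mutated most often
```

Under this scheme, a block that earns little credit during model evaluation is the most likely mutation target in the next evolutionary step.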
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of cold weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an in-depth
analysis of their challenges as well as technical improvements in recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible
publication.
Image Restoration Using Deep Neural Network Search Techniques
Thesis (Ph.D.) -- Seoul National University Graduate School: Dept. of Electrical and Information Engineering, College of Engineering, August 2021.
Image restoration is an important technology which can be used as a pre-processing step to increase the performance of various vision tasks. Image super-resolution is one of the important tasks in image restoration; it restores a high-resolution (HR) image from a low-resolution (LR) observation. The recent progress of deep convolutional neural networks has enabled great success in single image super-resolution (SISR), and performance keeps improving as networks are made deeper and more sophisticated structures are developed. However, finding an optimal structure for a given problem is a difficult task, even for human experts. For this reason, neural architecture search (NAS) methods have been introduced, which automate the procedure of constructing network structures. In this dissertation, I propose a new single image super-resolution framework using NAS. As performance improves, networks become more complex and deeper, so I apply a NAS algorithm to find the optimal network while reducing the effort of network design. In detail, the proposed scheme is summarized in three topics: image super-resolution using efficient neural architecture search, multi-branch neural architecture search for lightweight image super-resolution, and neural architecture search for image super-resolution using meta-transfer learning.
At first, I extend NAS to the super-resolution domain and find a lightweight densely connected network named DeCoNASNet. I use a hierarchical search strategy to find the best connections among local and global features. In this process, I define a complexity-based penalty and add it to the reward term of the REINFORCE algorithm. Experiments show that my DeCoNASNet outperforms state-of-the-art lightweight super-resolution networks designed both by hand and by existing NAS-based methods.
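The complexity-based penalty added to the REINFORCE reward can be sketched as below. This is a minimal sketch under assumptions: the PSNR-based quality term, the parameter budget, the coefficient `lam`, and the normalization are stand-ins, not the dissertation's exact formulation.

```python
def reward(psnr_db, num_params, budget_params, lam=1.0):
    """Controller reward = validation quality minus a penalty that
    grows with the parameter count beyond a budget (assumed form)."""
    over_budget = max(0.0, num_params - budget_params) / budget_params
    return psnr_db - lam * over_budget

def reinforce_update(log_prob_grad, r, baseline):
    """REINFORCE: scale the controller's log-probability gradient
    by the advantage (reward minus baseline)."""
    return [(r - baseline) * g for g in log_prob_grad]

# A slightly less accurate but much smaller network earns more reward,
# steering the search toward lightweight architectures.
small = reward(psnr_db=31.8, num_params=400_000, budget_params=500_000)
large = reward(psnr_db=32.0, num_params=900_000, budget_params=500_000)
assert small > large
```

The penalty only activates above the budget here; a penalty proportional to the raw parameter count would be an equally plausible variant.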
Secondly, I propose a new search space design with a multi-branch structure, which enlarges the search space to capture multi-scale features and thus reconstructs grainy areas better. I also adopt a parameter-sharing scheme in the multi-branch network so that the branches share information and the overall number of network parameters is reduced. Experiments show that the proposed method finds an optimal SISR network about twenty times faster than existing methods, while showing comparable performance in terms of PSNR versus parameters. A comparison of visual quality validates that the proposed SISR network reconstructs texture areas better than previous methods because of the enlarged search space for finding multi-scale features.
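The parameter-sharing idea in the multi-branch structure can be sketched numerically: each branch reuses one shared weight matrix plus a small branch-specific one, so adding a branch adds few parameters. The shapes, the mixing rule, and the averaging over branches are illustrative assumptions, not the MBNASNet design itself.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 16                                  # channel width (assumed)
W_shared = rng.standard_normal((C, C))  # one copy reused by all branches

def branch(x, W_own):
    """Shared transform plus a branch-specific refinement (ReLU mix)."""
    return np.maximum(0.0, x @ W_shared + x @ W_own)

# Three branches, each with its own small refinement matrix.
branches = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]
x = rng.standard_normal((8, C))         # 8 feature vectors
out = sum(branch(x, W) for W in branches) / len(branches)

# Sharing cuts the parameter count versus fully independent branches.
shared_params = W_shared.size + sum(W.size for W in branches)
unshared_params = 3 * (2 * C * C)       # if nothing were shared
assert shared_params < unshared_params
```

In a real super-resolution network the branches would differ in receptive-field scale rather than just in their refinement matrices, but the parameter accounting works the same way.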
Lastly, I apply meta-transfer learning to the NAS procedure for image super-resolution. I train the controller and the child network with a meta-learning scheme, which enables the controller to find promising networks for several scales simultaneously. Furthermore, the meta-trained child network is reused as the pre-trained parameters in the final evaluation phase, which improves the final image super-resolution results even further and efficiently reduces the search-evaluation gap problem.
Image restoration is an important technology that can be used as a pre-processing step to improve the performance of various image processing tasks. Image super-resolution, one of the important image restoration problems, restores a low-resolution image to a high-resolution one. Recently, deep-learning-based methods using convolutional neural networks (CNNs) have been widely used to solve the single image super-resolution (SISR) problem. In general, super-resolution performance can be improved by stacking CNNs deeper or by designing more complicated structures.
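The meta-transfer idea described above, meta-training one set of child-network weights across several scales and then reusing them to initialize the final evaluation network, can be sketched abstractly. Everything below is an illustrative assumption: the toy quadratic loss, the per-scale targets, and the learning rates stand in for real super-resolution training.

```python
SCALES = [2, 3, 4]
TARGET = {2: 1.0, 3: 1.5, 4: 2.5}   # per-scale optimum (toy stand-in)

def loss(w, scale):
    return (w - TARGET[scale]) ** 2

def grad(w, scale):
    return 2.0 * (w - TARGET[scale])

def meta_train(w, inner_lr=0.1, outer_lr=0.05, steps=200):
    """First-order meta-learning: adapt to each scale in an inner
    step, then update the shared weights from the adapted losses."""
    for _ in range(steps):
        meta_grad = 0.0
        for s in SCALES:
            w_task = w - inner_lr * grad(w, s)   # inner adaptation
            meta_grad += grad(w_task, s)          # outer signal
        w -= outer_lr * meta_grad / len(SCALES)
    return w

w0 = meta_train(0.0)
# The meta-trained start is closer to every scale's optimum than a
# from-scratch start, so one adaptation step already does better.
for s in SCALES:
    assert loss(w0 - 0.1 * grad(w0, s), s) < loss(0.0 - 0.1 * grad(0.0, s), s)
```

The same logic motivates reusing the meta-trained child network in the final evaluation phase: starting from weights that are good for all scales shrinks the gap between search-time and evaluation-time performance.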
However, finding the optimal structure for a given problem is a difficult and time-consuming task, even for an expert in the field. For this reason, neural architecture search (NAS) methods, which automate the procedure of constructing networks, have been introduced. This dissertation proposes new single image super-resolution methods using NAS.
The methods proposed in this dissertation can be summarized in three parts: image super-resolution using efficient neural architecture search (ENAS), image super-resolution using multi-branch neural architecture search, and image super-resolution using NAS with meta-transfer learning. First, we apply NAS, which has mainly been used for image classification, to image super-resolution and design a network structure named DeCoNASNet. We use a hierarchical search strategy to find the best connections for merging local and global features. In this process, we define a complexity-based penalty and add it to the reward signal of the REINFORCE algorithm so that the network achieves good performance with few trainable parameters. Experiments confirm that DeCoNASNet outperforms recent super-resolution architectures designed by hand or by existing NAS-based methods.
We also propose a method that designs a multi-branch network by enlarging the search space of NAS to learn features of several scales. Here, the branches share parameters at each position so that they exchange information and the number of parameters needed for the whole network is reduced. Experiments show that the proposed method finds network structures with good performance relative to their parameter size, and that they reconstruct complicated regions better than previous methods because multi-scale features are learned in the enlarged search space.
Finally, we propose a method that applies meta-transfer learning to NAS to solve image super-resolution problems of various scales. In this dissertation, the meta-transfer learning scheme is designed so that the controller can find network structures that are good for several scales simultaneously. In addition, the meta-trained network is reused as the starting point of training in the final performance evaluation, which further improves the final image super-resolution performance and effectively resolves the search-evaluation gap problem.
1 INTRODUCTION 1
1.1 Contribution 3
1.2 Contents 4
2 Neural Architecture Search for Image Super-Resolution Using Densely Constructed Search Space: DeCoNAS 5
2.1 Introduction 5
2.2 Proposed Method 9
2.2.1 Overall structure of DeCoNASNet 9
2.2.2 Constructing the DNB 11
2.2.3 Constructing controller for the DeCoNASNet 13
2.2.4 Training DeCoNAS and complexity-based penalty 13
2.3 Experimental results 15
2.3.1 Settings 15
2.3.2 Results 16
2.3.3 Ablation study 21
2.4 Summary 22
3 Multi-Branch Neural Architecture Search for Lightweight Image Super-resolution 23
3.1 Introduction 23
3.2 Related Work 26
3.2.1 Single image super-resolution 26
3.2.2 Neural architecture search 27
3.2.3 Image super-resolution with neural architecture search 29
3.3 Method 32
3.3.1 Overview of the Proposed MBNAS 32
3.3.2 Controller and complexity-based penalty 33
3.3.3 MBNASNet 35
3.3.4 Multi-scale block with partially shared Nodes 37
3.3.5 MBNAS 38
3.4 Datasets and experiments 39
3.4.1 Settings 39
3.4.2 Experiments on single image super-resolution (SISR) 41
3.5 Discussion 48
3.5.1 Effect of the complexity-based penalty to the performance of controller 49
3.5.2 Effect of multi-branch structure and partial parameter sharing scheme 50
3.5.3 Effect of gradient flow control weights and complexity-based penalty coefficient 51
3.6 Summary 52
4 Meta-transfer learning for simultaneous search of various scale image super-resolution 54
4.1 Introduction 54
4.2 Related Work 56
4.2.1 Single image super-resolution 56
4.2.2 Neural architecture search 57
4.2.3 Image super-resolution with neural architecture search 58
4.2.4 Meta-learning 59
4.3 Method 59
4.3.1 Meta-learning 60
4.3.2 Meta-transfer learning 62
4.3.3 Transfer-learning 63
4.4 Datasets and experiments 63
4.4.1 Settings 63
4.4.2 Experiments on single image super-resolution (SISR) 64
4.5 Summary 66
5 Conclusion 69
Abstract (In Korean) 80
- …