Adaptive Window Pruning for Efficient Local Motion Deblurring
Local motion blur commonly occurs in real-world photography due to the mixing
between moving objects and stationary backgrounds during exposure. Existing
image deblurring methods predominantly focus on global deblurring,
inadvertently affecting the sharpness of backgrounds in locally blurred images
and wasting unnecessary computation on sharp pixels, especially for
high-resolution images. This paper aims to adaptively and efficiently restore
high-resolution locally blurred images. We propose a local motion deblurring
vision Transformer (LMD-ViT) built on adaptive window pruning Transformer
blocks (AdaWPT). To focus deblurring on local regions and reduce computation,
AdaWPT prunes unnecessary windows, only allowing the active windows to be
involved in the deblurring processes. The pruning operation relies on the
blurriness confidence predicted by a confidence predictor that is trained
end-to-end using a reconstruction loss with Gumbel-Softmax re-parameterization
and a pruning loss guided by annotated blur masks. Our method removes local
motion blur effectively without distorting sharp regions, demonstrated by its
exceptional perceptual and quantitative improvements compared to
state-of-the-art methods. In addition, our approach substantially reduces FLOPs
by 66% and achieves more than a twofold increase in inference speed compared to
Transformer-based deblurring methods. We will make our code and annotated blur
masks publicly available.
Comment: 17 pages
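The pruning mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the confidence predictor is stubbed out as precomputed per-window logits, and the temperature and threshold values are assumptions.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.1, rng=None):
    """Sample (approximately discrete) keep/prune probabilities from logits."""
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))  # Gumbel noise
    z = (logits + g) / tau
    z -= z.max(axis=-1, keepdims=True)        # numerical stability
    y = np.exp(z)
    return y / y.sum(axis=-1, keepdims=True)

def prune_windows(windows, confidence_logits, threshold=0.5):
    """Keep only windows predicted as blurry; return them with their indices
    so deblurred outputs can be scattered back into place.

    windows: (N, window_pixels) flattened image windows
    confidence_logits: (N, 2) per-window logits for [sharp, blurry]
    """
    probs = gumbel_softmax(confidence_logits)
    keep = probs[:, 1] > threshold            # hard decision per window
    return windows[keep], np.flatnonzero(keep)

# toy example: 4 windows; the (stubbed) predictor marks windows 1 and 3 blurry
windows = np.arange(4 * 8, dtype=float).reshape(4, 8)
logits = np.array([[10., -10.], [-10., 10.], [10., -10.], [-10., 10.]])
kept, idx = prune_windows(windows, logits)
print(idx)  # only these windows enter the attention computation
```

Only the kept windows would pass through the Transformer blocks; during training the soft Gumbel-Softmax probabilities keep the decision differentiable, while inference can use the hard threshold shown here.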
A deep learning framework for quality assessment and restoration in video endoscopy
Endoscopy is a routine imaging technique used for both diagnosis and
minimally invasive surgical treatment. Artifacts such as motion blur, bubbles,
specular reflections, floating objects and pixel saturation impede the visual
interpretation and the automated analysis of endoscopy videos. Given the
widespread use of endoscopy in different clinical applications, we contend that
the robust and reliable identification of such artifacts and the automated
restoration of corrupted video frames is a fundamental medical imaging problem.
Existing state-of-the-art methods deal only with the detection and restoration
of selected artifacts. However, endoscopy videos typically contain numerous
artifacts, which motivates a comprehensive solution.
We propose a fully automatic framework that can: 1) detect and classify six
different primary artifacts, 2) provide a quality score for each frame and 3)
restore mildly corrupted frames. To detect the different artifacts, our framework
exploits a fast multi-scale, single-stage convolutional neural network detector.
We introduce a quality metric to assess frame quality and predict image
restoration success. Generative adversarial networks with carefully chosen
regularization are finally used to restore corrupted frames.
Our detector yields the highest mean average precision (mAP at a 5% threshold)
of 49.0 and the lowest computational time of 88 ms, allowing for accurate
real-time processing. Our restoration models for blind deblurring, saturation
correction, and inpainting demonstrate significant improvements over previous
methods. On a set of 10 test videos, we show that our approach preserves an
average of 68.7% of frames, which is 25% more than are retained from the raw
videos.
Comment: 14 pages
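A per-frame quality score of this kind can be sketched in a few lines. The artifact classes below follow the abstract, but the weights, thresholds, and aggregation rule are illustrative assumptions, not the paper's metric.

```python
# Illustrative class weights (assumed, not the paper's values): each detection
# (artifact class, detector confidence, area fraction) lowers the frame score.
ARTIFACT_WEIGHTS = {
    "motion_blur": 0.30, "saturation": 0.25, "specularity": 0.20,
    "bubbles": 0.15, "floating_object": 0.10,
}

def frame_quality(detections):
    """detections: list of (artifact_class, confidence, area_fraction in [0, 1]).
    Returns a score in [0, 1]; 1.0 means no detected artifacts."""
    penalty = sum(ARTIFACT_WEIGHTS[c] * conf * area for c, conf, area in detections)
    return max(0.0, 1.0 - penalty)

def triage(score, restore_threshold=0.5, keep_threshold=0.8):
    """Route frames: keep as-is, restore mildly corrupted ones, or discard."""
    if score >= keep_threshold:
        return "keep"
    return "restore" if score >= restore_threshold else "discard"

# a frame with large motion blur and some pixel saturation
score = frame_quality([("motion_blur", 0.9, 0.8), ("saturation", 0.7, 0.5)])
print(score, triage(score))
```

A score-based triage like this is what lets a restoration pipeline spend its GAN-based models only on frames that are corrupted yet still recoverable.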
Selected Topics in Bayesian Image/Video Processing
In this dissertation, three problems in image deblurring, inpainting, and virtual content insertion are solved in a Bayesian framework.
Camera shake, motion, or defocus during exposure leads to image blur. Single-image deblurring has achieved remarkable results by solving a MAP problem, but there is no perfect solution due to inaccurate image priors and estimators. In the first part, a new non-blind deconvolution algorithm is proposed. The image prior is represented by a Gaussian Scale Mixture (GSM) model, which is estimated from non-blurry images as training data. Our experimental results on a total of twelve natural images show that more details are restored than with previous deblurring algorithms.
In augmented reality, it is a challenging problem to insert virtual content into video streams by blending it with spatial and temporal information. A generic virtual content insertion (VCI) system is introduced in the second part. To the best of my knowledge, it is the first successful system to insert content on building facades from street-view video streams. Without knowing the camera positions, the geometry model of a building facade is established using a combined detection-and-tracking strategy. Moreover, motion stabilization, dynamic registration, and color harmonization contribute to the excellent augmented performance of this automatic VCI system.
Coding efficiency is an important objective in video coding. In recent years, video coding standards have developed by adding new tools, but this requires numerous modifications to complex coding systems. It is therefore desirable to consider alternative standard-compliant approaches that do not modify the codec structure. In the third part, an exemplar-based data-pruning video compression scheme for intra frames is introduced. Data pruning is used as a pre-processing tool to remove part of the video data before encoding. At the decoder, the missing data is reconstructed by a sparse linear combination of similar patches. The novelty is to create a patch library that exploits the similarity of patches. The scheme achieves an average 4% bit-rate reduction on some high-definition videos.
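The decoder-side reconstruction can be sketched as follows. This is a hedged toy version, assuming a missing patch is matched to the library through its surviving border pixels and reconstructed from its k nearest neighbours; the actual scheme's matching and weighting details may differ.

```python
import numpy as np

def reconstruct_patch(border, library_borders, library_patches, k=3):
    """border: observed pixels around the missing patch, shape (d,)
    library_borders: (N, d) border pixels of the library patches
    library_patches: (N, p) full library patches
    Returns the reconstructed patch, shape (p,)."""
    dist = np.linalg.norm(library_borders - border, axis=1)
    nearest = np.argsort(dist)[:k]                      # k most similar patches
    # least-squares weights so the selected borders best explain the observation
    w, *_ = np.linalg.lstsq(library_borders[nearest].T, border, rcond=None)
    return w @ library_patches[nearest]

# toy library: the missing patch is an even mix of library patches 0 and 1
library_patches = np.array([[1., 0., 0., 1., 0., 0.],
                            [0., 1., 0., 0., 1., 0.],
                            [0., 0., 1., 0., 0., 1.],
                            [1., 1., 1., 1., 1., 1.]])
library_borders = library_patches[:, :3]   # toy "border" = first three pixels
target = 0.5 * library_patches[0] + 0.5 * library_patches[1]
recon = reconstruct_patch(target[:3], library_borders, library_patches, k=3)
print(np.allclose(recon, target))  # True
```

Because the combination is sparse (only k patches get nonzero weight), the decoder needs no side information beyond the shared patch library.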
Image partial blur detection and classification.
Liu, Renting. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 40-46). Abstracts in English and Chinese.
Chapter 1 Introduction
Chapter 2 Related Work and System Overview
2.1 Previous Work in Blur Analysis
2.1.1 Blur detection and estimation
2.1.2 Image deblurring
2.1.3 Low DoF image auto-segmentation
2.2 System Overview
Chapter 3 Blur Features and Classification
3.1 Blur Features
3.1.1 Local Power Spectrum Slope
3.1.2 Gradient Histogram Span
3.1.3 Maximum Saturation
3.1.4 Local Autocorrelation Congruency
3.2 Classification
Chapter 4 Experiments and Results
4.1 Blur Patch Detection
4.2 Blur Degree
4.3 Blur Region Segmentation
Chapter 5 Conclusion and Future Work
Bibliography
Appendix A Blurred Edge Analysis
New Models, Algorithms, and Analysis for Dynamic Scene Deblurring
Thesis (Ph.D.)--Seoul National University Graduate School: Dept. of Electrical and Computer Engineering, August 2016. Advisor: Kyoung Mu Lee.
Blurring artifacts are the most common flaws in photographs. To remove these artifacts, many deblurring methods, which restore a sharp image from a blurry one, have been studied extensively in the field of computational photography. However, state-of-the-art deblurring methods rest on the strong assumption that the captured scenes are static, so a great deal remains to be done. In particular, these conventional methods fail to deblur images captured in dynamic environments, which exhibit spatially varying blurs caused by sources such as camera shake (including out-of-plane motion), moving objects, and depth variation. The deblurring problem therefore becomes far more challenging for dynamic scenes.
This dissertation therefore addresses the deblurring problem of general dynamic scenes, introducing new solutions that remove spatially varying blurs, unlike conventional methods built on the static-scene assumption.
Three kinds of dynamic scene deblurring methods are proposed to achieve this goal, and they are based on: (1) segmentation, (2) sharp exemplar, (3) kernel-parametrization.
The proposed approaches progress from segment-wise to pixel-wise, with general pixel-wise varying blurs handled in the end.
First, the segmentation-based deblurring method estimates the latent image, multiple blur kernels, and the associated segments jointly. With the aid of this joint approach, the segmentation-based method can estimate an accurate blur kernel within each segment, remove segment-wise varying blurs, and reduce the artifacts at motion boundaries that are common in conventional approaches. Next, an exemplar-based deblurring method is proposed, which utilizes a sharp exemplar to estimate a highly accurate blur kernel and overcomes the limitation of the segmentation-based method, which cannot handle small or texture-less segments. Lastly, the deblurring method using kernel-parametrization approximates the locally varying kernel as linear using motion flows. The kernel-parametrized method is thus generally applicable to removing pixel-wise varying blurs, and it estimates the latent image and the motion flow at the same time.
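The kernel-parametrization idea can be illustrated with a minimal sketch: each pixel's blur kernel is approximated as a line segment along a local motion vector, so a per-pixel motion flow fully parametrizes a spatially varying blur. The function names and sampling details below are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

def linear_kernel(u, v, taps=5):
    """Parametrize the local kernel as a line along motion (u, v):
    taps uniform samples over the normalized exposure interval."""
    ts = np.linspace(-0.5, 0.5, taps)
    return [(t * u, t * v, 1.0 / taps) for t in ts]  # (dx, dy, weight)

def blur_pixel(img, y, x, u, v, taps=5):
    """Apply the pixel's own motion-parametrized kernel
    (nearest-neighbour sampling; a real model would interpolate)."""
    h, w_img = img.shape
    acc = 0.0
    for dx, dy, w in linear_kernel(u, v, taps):
        yy = int(round(min(max(y + dy, 0), h - 1)))
        xx = int(round(min(max(x + dx, 0), w_img - 1)))
        acc += w * img[yy, xx]
    return acc

img = np.zeros((5, 5)); img[2, 2] = 1.0   # a single bright pixel
# a horizontal motion of 4 pixels smears the impulse along its row
out = [round(blur_pixel(img, 2, x, u=4.0, v=0.0), 2) for x in range(5)]
print(out)  # → [0.2, 0.2, 0.2, 0.2, 0.2]
```

Estimating the per-pixel (u, v) field then amounts to motion-flow estimation, which is why this parametrization lets the latent image and the motion flow be recovered jointly.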
With the proposed methods, significantly improved deblurring quality is achieved, and intensive experimental evaluations demonstrate the superiority of the proposed methods in dynamic scene deblurring, where state-of-the-art methods fail.
Chapter 1 Introduction
Chapter 2 Image Deblurring with Segmentation
2.1 Introduction and Related Work
2.2 Segmentation-based Dynamic Scene Deblurring Model
2.2.1 Adaptive blur model selection
2.2.2 Regularization
2.3 Optimization
2.3.1 Sharp image restoration
2.3.2 Weight estimation
2.3.3 Kernel estimation
2.3.4 Overall procedure
2.4 Experiments
2.5 Summary
Chapter 3 Image Deblurring with Exemplar
3.1 Introduction and Related Work
3.2 Method Overview
3.3 Stage I: Exemplar Acquisition
3.3.1 Sharp image acquisition and preprocessing
3.3.2 Exemplar from blur-aware optical flow estimation
3.4 Stage II: Exemplar-based Deblurring
3.4.1 Exemplar-based latent image restoration
3.4.2 Motion-aware segmentation
3.4.3 Robust kernel estimation
3.4.4 Unified energy model and optimization
3.5 Stage III: Post-processing and Refinement
3.6 Experiments
3.7 Summary
Chapter 4 Image Deblurring with Kernel-Parametrization
4.1 Introduction and Related Work
4.2 Preliminary
4.3 Proposed Method
4.3.1 Image-statistics-guided motion
4.3.2 Adaptive variational deblurring model
4.4 Optimization
4.4.1 Motion estimation
4.4.2 Latent image restoration
4.4.3 Kernel re-initialization
4.5 Experiments
4.6 Summary
Chapter 5 Video Deblurring with Kernel-Parametrization
5.1 Introduction and Related Work
5.2 Generalized Video Deblurring
5.2.1 A new data model based on kernel-parametrization
5.2.2 A new optical flow constraint and temporal regularization
5.2.3 Spatial regularization
5.3 Optimization Framework
5.3.1 Sharp video restoration
5.3.2 Optical flow estimation
5.3.3 Defocus blur map estimation
5.4 Implementation Details
5.4.1 Initialization and duty cycle estimation
5.4.2 Occlusion detection and refinement
5.5 Motion Blur Dataset
5.5.1 Dataset generation
5.6 Experiments
5.7 Summary
Chapter 6 Conclusion
Bibliography
Abstract (in Korean)
- …