Factored axis-aligned filtering for rendering multiple distribution effects
Monte Carlo (MC) ray tracing for photo-realistic rendering often requires hours to render a single image due to the large sampling rates needed for convergence. Previous methods have attempted to filter sparsely sampled MC renders, but these methods have high reconstruction overheads. Recent work has shown fast performance for individual effects, such as soft shadows and indirect illumination, using axis-aligned filtering. While some components of light transport, such as indirect or area illumination, are smooth, they are often multiplied by high-frequency components such as texture, which prevents their sparse sampling and reconstruction.
We propose an approach to adaptively sample and filter for simultaneously rendering primary (defocus blur) and secondary (soft shadows and indirect illumination) distribution effects, based on a multi-dimensional frequency analysis of the direct and indirect illumination light fields. We describe a novel approach of factoring texture and irradiance in the presence of defocus blur, which allows for pre-filtering noisy irradiance when the texture is not noisy. Our approach naturally allows for different sampling rates for primary and secondary effects, further reducing the overall ray count. While the theory considers only Lambertian surfaces, we obtain promising results for moderately glossy surfaces. We demonstrate a 30x reduction in sampling rate compared to equal-quality noise-free MC. Combined with a GPU implementation and low filtering overhead, we can render scenes with complex geometry and diffuse and glossy BRDFs in a few seconds.
Funding: National Science Foundation (U.S.) (Grant CGV 1115242); National Science Foundation (U.S.) (Grant CGV 1116303); Intel Corporation (Science and Technology Center for Visual Computing).
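The texture/irradiance factoring can be illustrated with a minimal sketch: demodulate the noisy render by the (noise-free) texture, filter the smooth irradiance, then re-modulate. A plain Gaussian stands in here for the axis-aligned filter whose bandwidth the paper derives from its frequency analysis; the function name and parameters are our own, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def factored_filter(noisy_color, texture, sigma=2.0, eps=1e-4):
    """Demodulate texture, filter the smooth irradiance, re-modulate.

    noisy_color, texture: HxWx3 float arrays. `sigma` is a stand-in for the
    filter bandwidth the paper's frequency analysis would prescribe.
    """
    # Factor out the high-frequency texture so only smooth irradiance remains.
    irradiance = noisy_color / (texture + eps)
    # Pre-filter the noisy irradiance (filtering the full color image would
    # blur the texture detail; filtering irradiance alone preserves it).
    irradiance = gaussian_filter(irradiance, sigma=(sigma, sigma, 0))
    # Re-apply the noise-free texture.
    return irradiance * (texture + eps)
```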
Temporal light field reconstruction for rendering distribution effects
Traditionally, effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, suffer from noise due to the variance of the high-dimensional integrand. In this paper, we describe a general reconstruction technique that exploits the anisotropy in the temporal light field and permits efficient reuse of samples between pixels, multiplying the effective sampling rate by a large factor. We show that our technique can be applied in situations that are challenging or impossible for previous anisotropic reconstruction methods, and that it can yield good results with very sparse inputs. We demonstrate our method for simultaneous motion blur, depth of field, and soft shadows.
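The sample-reuse idea can be sketched as a toy reprojection: each space-time sample is advected along its screen-space velocity to the reconstruction time, so one sample contributes to many pixels. This is a gross simplification, not the paper's sheared anisotropic reconstruction filter; the sample layout and function name are assumptions.

```python
import numpy as np

def reproject_samples(samples, t_target, width, height):
    """Reuse sparse space-time samples at a single reconstruction time.

    `samples` is a list of (x, y, t, (vx, vy), radiance) tuples (hypothetical
    layout). Moving each sample along its screen-space velocity to the target
    time multiplies the effective sampling rate at each pixel.
    """
    image = np.zeros((height, width, 3))
    weight = np.zeros((height, width, 1))
    for x, y, t, (vx, vy), radiance in samples:
        # Advect the sample along its trajectory to the reconstruction time.
        xr, yr = x + vx * (t_target - t), y + vy * (t_target - t)
        xi, yi = int(round(xr)), int(round(yr))
        if 0 <= xi < width and 0 <= yi < height:
            image[yi, xi] += radiance
            weight[yi, xi] += 1.0
    return image / np.maximum(weight, 1.0)
```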
Dr.Bokeh: DiffeRentiable Occlusion-aware Bokeh Rendering
Bokeh is widely used in photography to draw attention to the subject while effectively isolating distractions in the background. Computational methods simulate bokeh effects without relying on a physical camera lens. However, digital bokeh synthesis faces two main challenges: color bleeding and partial occlusion at object boundaries. Our primary goal is to overcome these two challenges using the physical principles that govern bokeh formation. To achieve this, we propose a novel and accurate filtering-based bokeh rendering equation and a physically-based, occlusion-aware bokeh renderer, dubbed Dr.Bokeh, which addresses these challenges during the rendering stage without the need for post-processing or data-driven approaches. Our rendering algorithm first preprocesses the input RGBD to obtain a layered scene representation. Dr.Bokeh then takes the layered representation and user-defined lens parameters to render photo-realistic lens blur. By softening non-differentiable operations, we make Dr.Bokeh differentiable so that it can be plugged into a machine-learning framework. We perform quantitative and qualitative evaluations on synthetic and real-world images to validate the rendering quality and the differentiability of our method. We show that Dr.Bokeh not only outperforms state-of-the-art bokeh rendering algorithms in photo-realism but also improves depth quality in depth-from-defocus.
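For contrast with the occlusion-aware formulation, below is a minimal single-layer gather blur driven by the thin-lens circle of confusion. Because it ignores the layered representation, it exhibits exactly the boundary artifacts (color bleeding, missing partial occlusion) that Dr.Bokeh targets; parameter names and units are illustrative, not the paper's.

```python
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Thin-lens CoC diameter (same units as focal_len) from per-pixel depth."""
    return np.abs(aperture * focal_len * (depth - focus_dist)
                  / (depth * (focus_dist - focal_len)))

def naive_gather_bokeh(rgb, depth, focus_dist, focal_len=0.05,
                       aperture=0.01, px_per_unit=5000, max_r=8):
    """Single-layer gather: each pixel averages neighbors whose blur disc
    covers it. No layering, so occlusion at boundaries is handled wrongly."""
    h, w, _ = rgb.shape
    coc_px = circle_of_confusion(depth, focus_dist, focal_len, aperture) * px_per_unit
    out = np.zeros_like(rgb)
    for y in range(h):
        for x in range(w):
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-max_r, max_r + 1):
                for dx in range(-max_r, max_r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Neighbor contributes if its disc reaches (x, y).
                        r = coc_px[ny, nx] / 2.0
                        if dx * dx + dy * dy <= max(r, 0.5) ** 2:
                            acc += rgb[ny, nx]
                            wsum += 1.0
            out[y, x] = acc / wsum
    return out
```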
Combined conjugate and pupil adaptive optics in widefield microscopy
Traditionally, adaptive optics (AO) systems for microscopy have applied correction at the pupil plane; however, this performs poorly in samples that exhibit both spatially-variant aberrations, such as those caused by non-flat sample interfaces, and spatially-invariant aberrations, such as spherical aberration caused by a mismatch between the sample's index of refraction and the one for which the objective was designed. Here, we demonstrate well-corrected, wide field-of-view (FOV) microscopy by simultaneously correcting the two types of aberrations using two AO loops. Such an approach is necessary in wide-field applications where both types of aberration may be present, since each AO loop can fully correct only one type. Wide-FOV corrections are demonstrated in a trans-illumination microscope equipped with two deformable mirrors (DMs), using a partitioned aperture wavefront (PAW) sensor to directly control the DM conjugated to the sample interface and a sensor-less genetic algorithm to control the DM conjugated to the objective's pupil.
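The sensor-less loop can be sketched abstractly: evolve DM mode coefficients to maximize an image-quality metric, with the hardware hidden behind a stub. This is a generic genetic algorithm under our own assumptions, not the authors' exact operators; `apply_dm_and_capture`, the metric, and all hyperparameters are hypothetical.

```python
import numpy as np

def sharpness(image):
    """Image-quality metric the sensor-less loop maximizes (here, intensity
    variance; other metrics such as spatial-frequency content also work)."""
    return float(np.var(image))

def genetic_ao(apply_dm_and_capture, n_modes=12, pop=20, gens=50,
               sigma=0.1, rng=np.random.default_rng(0)):
    """Evolve pupil-DM mode coefficients without a wavefront sensor.

    `apply_dm_and_capture(coeffs)` is a stand-in for hardware: it shapes
    the deformable mirror and returns a camera frame.
    """
    population = rng.normal(0.0, sigma, size=(pop, n_modes))
    for _ in range(gens):
        scores = np.array([sharpness(apply_dm_and_capture(c)) for c in population])
        elite = population[np.argsort(scores)[-pop // 4:]]        # keep best quarter
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        crossover = np.where(rng.random((pop, n_modes)) < 0.5,
                             parents[:, 0], parents[:, 1])        # uniform crossover
        population = crossover + rng.normal(0.0, sigma * 0.2,
                                            size=(pop, n_modes))  # mutation
    scores = np.array([sharpness(apply_dm_and_capture(c)) for c in population])
    return population[int(np.argmax(scores))]
```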
Online Video Deblurring via Dynamic Temporal Blending Network
State-of-the-art video deblurring methods are capable of removing non-uniform blur caused by unwanted camera shake and/or object motion in dynamic scenes. However, most existing methods are based on batch processing and thus need access to all recorded frames, which makes them computationally demanding and time-consuming and limits their practical use. In contrast, we propose an online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance. In particular, we introduce a novel architecture that extends the receptive field while keeping the overall size of the network small to enable fast execution. In doing so, our network is able to remove even large blur caused by strong camera shake and/or fast-moving objects. Furthermore, we propose a novel network layer that enforces temporal consistency between consecutive frames by dynamic temporal blending, which compares and adaptively (at test time) shares features obtained at different time steps. We show the superiority of the proposed method in an extensive experimental evaluation.
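A rough numpy sketch of the blending mechanism, as we read it: a per-position weight computed at test time from feature similarity mixes features from consecutive time steps, so consistent regions reuse past information while changed regions rely on the current frame. The learned components of the actual layer are omitted; the names and similarity measure are assumptions.

```python
import numpy as np

def dynamic_temporal_blend(feat_t, feat_prev, alpha=5.0):
    """Blend current features with features from the previous time step.

    feat_t, feat_prev: HxWxC feature maps. The blend weight is computed on
    the fly from feature similarity; `alpha` controls how sharply the weight
    falls off as features disagree.
    """
    # Per-position similarity in [0, 1]: close to 1 where features agree.
    dist = np.linalg.norm(feat_t - feat_prev, axis=-1, keepdims=True)
    w = np.exp(-alpha * dist)
    # The weighted mix enforces temporal consistency between frames.
    return w * feat_prev + (1.0 - w) * feat_t
```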
New Models, Algorithms, and Analyses for Dynamic Scene Deblurring
Doctoral dissertation, Department of Electrical and Computer Engineering, Seoul National University Graduate School, August 2016. Advisor: Kyoung Mu Lee.
Blurring artifacts are the most common flaws in photographs. To remove them, many deblurring methods that restore sharp images from blurry ones have been studied extensively in the field of computational photography. However, state-of-the-art deblurring methods rest on the strong assumption that the captured scene is static, so a great deal remains to be done. In particular, these conventional methods fail to deblur images captured in dynamic environments, which exhibit spatially varying blurs caused by various sources such as camera shake (including out-of-plane motion), moving objects, and depth variation. The deblurring problem therefore becomes considerably more difficult and challenging for dynamic scenes.
This dissertation therefore aims to address the deblurring problem of general dynamic scenes, introducing new solutions that remove spatially varying blurs, unlike conventional methods built on the static-scene assumption.
Three kinds of dynamic scene deblurring methods are proposed to achieve this goal, based on (1) segmentation, (2) sharp exemplars, and (3) kernel-parametrization.
The proposed approaches progress from segment-wise to pixel-wise methods, ultimately handling general pixel-wise varying blurs.
First, the segmentation-based deblurring method jointly estimates the latent image, multiple blur kernels, and the associated segments. Thanks to this joint approach, the segmentation-based method achieves an accurate blur kernel within each segment, removes segment-wise varying blurs, and reduces the artifacts at motion boundaries that are common in conventional approaches. Next, an exemplar-based deblurring method is proposed, which utilizes a sharp exemplar to estimate a highly accurate blur kernel and overcomes the limitation of the segmentation-based method, namely that it cannot handle small or texture-less segments. Lastly, the deblurring method using kernel-parametrization approximates the locally varying kernel as linear using motion flows; it is therefore generally applicable to removing pixel-wise varying blurs, and it estimates the latent image and motion flow at the same time, as sketched below.
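The kernel-parametrization can be illustrated with a toy forward model: each pixel's kernel is a line segment along its motion-flow vector, so blurring amounts to averaging samples along that segment. This sketch only renders blur given a flow; the inverse estimation problem the thesis actually solves is not shown, and all names are hypothetical.

```python
import numpy as np

def apply_linear_kernels(sharp, flow, taps=9):
    """Blur each pixel along its own motion vector.

    sharp: HxWx3 image; flow: HxWx2 motion field. The flow parametrizes the
    local kernel as a line segment: averaging nearest-neighbor samples at
    evenly spaced points along +/- half the motion vector approximates the
    pixel-wise varying blur.
    """
    h, w, _ = sharp.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    out = np.zeros_like(sharp)
    for s in np.linspace(-0.5, 0.5, taps):
        # Nearest-neighbor sample at (x + s*u, y + s*v) for every pixel.
        sx = np.clip(np.round(xs + s * flow[..., 0]).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys + s * flow[..., 1]).astype(int), 0, h - 1)
        out += sharp[sy, sx]
    return out / taps
```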
With the proposed methods, significantly improved deblurring quality is achieved, and intensive experimental evaluations demonstrate the superiority of the proposed methods in dynamic scene deblurring, where state-of-the-art methods fail.
Chapter 1 Introduction
Chapter 2 Image Deblurring with Segmentation
2.1 Introduction and Related Work
2.2 Segmentation-based Dynamic Scene Deblurring Model
2.2.1 Adaptive blur model selection
2.2.2 Regularization
2.3 Optimization
2.3.1 Sharp image restoration
2.3.2 Weight estimation
2.3.3 Kernel estimation
2.3.4 Overall procedure
2.4 Experiments
2.5 Summary
Chapter 3 Image Deblurring with Exemplar
3.1 Introduction and Related Work
3.2 Method Overview
3.3 Stage I: Exemplar Acquisition
3.3.1 Sharp image acquisition and preprocessing
3.3.2 Exemplar from blur-aware optical flow estimation
3.4 Stage II: Exemplar-based Deblurring
3.4.1 Exemplar-based latent image restoration
3.4.2 Motion-aware segmentation
3.4.3 Robust kernel estimation
3.4.4 Unified energy model and optimization
3.5 Stage III: Post-processing and Refinement
3.6 Experiments
3.7 Summary
Chapter 4 Image Deblurring with Kernel-Parametrization
4.1 Introduction and Related Work
4.2 Preliminary
4.3 Proposed Method
4.3.1 Image-statistics-guided motion
4.3.2 Adaptive variational deblurring model
4.4 Optimization
4.4.1 Motion estimation
4.4.2 Latent image restoration
4.4.3 Kernel re-initialization
4.5 Experiments
4.6 Summary
Chapter 5 Video Deblurring with Kernel-Parametrization
5.1 Introduction and Related Work
5.2 Generalized Video Deblurring
5.2.1 A new data model based on kernel-parametrization
5.2.2 A new optical flow constraint and temporal regularization
5.2.3 Spatial regularization
5.3 Optimization Framework
5.3.1 Sharp video restoration
5.3.2 Optical flows estimation
5.3.3 Defocus blur map estimation
5.4 Implementation Details
5.4.1 Initialization and duty cycle estimation
5.4.2 Occlusion detection and refinement
5.5 Motion Blur Dataset
5.5.1 Dataset generation
5.6 Experiments
5.7 Summary
Chapter 6 Conclusion
Bibliography
Abstract (in Korean)