5,143 research outputs found
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite a certain level of
progress, image deblurring, especially the blind case, remains limited by
complex application conditions that make the blur kernel spatially variant and
hard to estimate. This review provides a holistic understanding of and deep
insight into image deblurring. We also analyze the empirical evidence for
representative methods, discuss practical issues, and outline promising future
directions.
Comment: 53 pages, 17 figures
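As a concrete illustration of the non-blind setting the review covers, here is a minimal Wiener-deconvolution sketch: the blur kernel is assumed known, and a `noise_power` regulariser (a hypothetical parameter name) tames the ill-posed inversion. This is a classical baseline, not any specific method from the survey:

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-3):
    # Frequency-domain Wiener filter: F_hat = conj(H) * G / (|H|^2 + noise_power).
    # The noise_power term regularises the otherwise ill-posed inversion.
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel spectrum (zero-padded)
    G = np.fft.fft2(blurred)                   # blurred-image spectrum
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F_hat))

# Toy demo: circularly blur a synthetic image with a 3x3 box kernel, then invert.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
kernel = np.full((3, 3), 1.0 / 9.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
# Tiny noise_power -> near-exact inversion; real noisy images need a larger value.
restored = wiener_deblur(blurred, kernel, noise_power=1e-9)
```

In the blind setting discussed above, `kernel` would itself be unknown and have to be estimated jointly, which is what makes that case substantially harder.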
Aperture Supervision for Monocular Depth Estimation
We present a novel method to train machine learning algorithms to estimate
scene depths from a single image, by using the information provided by a
camera's aperture as supervision. Prior works use a depth sensor's outputs or
images of the same scene from alternate viewpoints as supervision, while our
method instead uses images from the same viewpoint taken with a varying camera
aperture. To enable learning algorithms to use aperture effects as supervision,
we introduce two differentiable aperture rendering functions that use the input
image and predicted depths to simulate the depth-of-field effects caused by
real camera apertures. We train a monocular depth estimation network end-to-end
to predict the scene depths that best explain these finite-aperture images as
defocus-blurred renderings of the input all-in-focus image.
Comment: To appear at CVPR 2018 (updated to camera-ready version)
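The idea of rendering depth-of-field from an image and a depth map can be illustrated with a toy layered defocus compositor. This is a simplified, non-differentiable numpy stand-in (the function name and parameters are hypothetical), not the paper's actual aperture rendering functions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def render_defocus(image, depth, focus_dist, aperture, n_layers=4):
    # Split the scene into depth layers, blur each with a window proportional
    # to its circle of confusion aperture * |1/depth_layer - 1/focus_dist|,
    # and composite the blurred layers with normalised weights.
    edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
    out = np.zeros_like(image)
    weight = np.zeros_like(image)
    for i in range(n_layers):
        mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
        layer_d = 0.5 * (edges[i] + edges[i + 1])
        coc = aperture * abs(1.0 / layer_d - 1.0 / focus_dist)
        size = max(1, int(2 * coc) | 1)   # odd blur-window size in pixels
        out += uniform_filter(image * mask, size)
        weight += uniform_filter(mask, size)
    return out / np.maximum(weight, 1e-8)

# A plane at the focus distance renders sharp; a defocused plane gets blurred.
rng = np.random.default_rng(1)
img = rng.random((16, 16))
depth = np.full((16, 16), 2.0)
in_focus = render_defocus(img, depth, focus_dist=2.0, aperture=0.1)
defocused = render_defocus(img, depth, focus_dist=1.0, aperture=10.0)
```

In the paper's setting the rendering must be differentiable in the predicted depths so gradients can flow back to the depth network; the hard layer masks here would be replaced by soft, depth-dependent weights.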
EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
Asynchronously operating event cameras find many applications due to their
high dynamic range, absence of motion blur, low latency, and low data bandwidth.
The field has seen remarkable progress during the last few years, and existing
event-based 3D reconstruction approaches recover sparse point clouds of the
scene. However, such sparsity is a limiting factor in many cases, especially in
computer vision and graphics, and has not been addressed satisfactorily so
far. Accordingly, this paper proposes the first approach for 3D-consistent,
dense and photorealistic novel view synthesis using just a single colour event
stream as input. At the core of our method is a neural radiance field trained
entirely in a self-supervised manner from events while preserving the original
resolution of the colour event channels. Next, our ray sampling strategy is
tailored to events and allows for data-efficient training. At test time, our method
produces results in the RGB space at unprecedented quality. We evaluate our
method qualitatively and quantitatively on several challenging synthetic and
real scenes and show that it produces significantly denser and more visually
appealing renderings than the existing methods. We also demonstrate robustness
in challenging scenarios with fast motion and under low lighting conditions. We
will release our dataset and our source code to facilitate the research field,
see https://4dqv.mpi-inf.mpg.de/EventNeRF/.
Comment: 18 pages, 18 figures, 3 tables
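The neural radiance field at the method's core relies on standard volume rendering along camera rays. A minimal numpy sketch of the generic NeRF-style compositing step (not the paper's event-specific formulation):

```python
import numpy as np

def composite_ray(sigmas, deltas, colours):
    # Per-sample alpha from density and step size, transmittance via a
    # cumulative product, then the classic weights w_i = T_i * alpha_i.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return weights @ colours, weights   # rendered RGB and per-sample weights

# Three samples along one ray, each with a density, step size, and RGB colour.
sigmas = np.array([0.5, 1.0, 2.0])
deltas = np.full(3, 0.1)
colours = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
rgb, weights = composite_ray(sigmas, deltas, colours)
```

Training fits the density and colour fields so that rendered rays reproduce the observed events; the paper's contribution lies in supervising this entirely from a colour event stream with an event-tailored ray sampling strategy.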