Reconstruction of high dynamic range images with poisson noise modeling and integrated denoising
In this paper, we present a new method for High Dynamic Range (HDR) reconstruction based on a set of multiple photographs with different exposure times. While most existing techniques take a deterministic approach by assuming that the acquired low dynamic range (LDR) images are noise-free, we explicitly model the photon arrival process by assuming sensor data corrupted by Poisson noise. Taking the noise characteristics of the sensor data into account leads to a more robust way to estimate the non-parametric camera response function (CRF) than existing techniques. To further improve the HDR reconstruction, we adopt the split-Bregman framework and use Total Variation for regularization. Experimental results on real camera images and ground-truth data show the effectiveness of the proposed approach.
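To illustrate why a Poisson photon-count model helps when merging exposures, here is a minimal sketch (hypothetical function names; this is not the paper's full estimator, which also recovers the CRF and applies split-Bregman TV regularization). If counts follow n_i ~ Poisson(E * t_i), the maximum-likelihood irradiance estimate over unsaturated pixels weights each exposure by its duration:

```python
import numpy as np

def merge_hdr_poisson(ldr_counts, exposure_times, full_well=4095, sat_frac=0.95):
    """Maximum-likelihood HDR merge under a Poisson photon-count model.

    If counts n_i ~ Poisson(E * t_i) for irradiance E and exposure t_i,
    the ML estimate over unsaturated pixels is sum(n_i) / sum(t_i).
    """
    ldr = np.asarray(ldr_counts, dtype=float)                # (K, H, W)
    t = np.asarray(exposure_times, dtype=float)[:, None, None]
    valid = ldr < sat_frac * full_well                       # mask clipped pixels
    num = np.where(valid, ldr, 0.0).sum(axis=0)
    den = np.where(valid, t, 0.0).sum(axis=0)
    return num / np.maximum(den, 1e-12)
```

Saturated frames are simply excluded per pixel, so long exposures contribute in the shadows and short exposures take over in the highlights.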
Astronomical Data Analysis and Sparsity: from Wavelets to Compressed Sensing
Wavelets have been used extensively for several years now in astronomy for many purposes, ranging from data filtering and deconvolution to star and galaxy detection or cosmic ray removal. More recent sparse representations such as ridgelets or curvelets have also been proposed for the detection of anisotropic features such as cosmic strings in the cosmic microwave background. In this paper, we review a range of sparsity-based methods that have been proposed for astronomical data analysis. We also discuss the impact of Compressed Sensing, the new sampling theory, on astronomy for collecting data, transferring them to Earth, and reconstructing an image from incomplete measurements.
Comment: Submitted. Full paper with figures available at http://jstarck.free.fr/IEEE09_SparseAstro.pd
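The core sparsity idea behind the methods reviewed here can be sketched in a few lines (a toy illustration, not the paper's algorithms): transform the data into a basis where the signal is sparse, soft-threshold the coefficients, and transform back. A one-level 1-D Haar transform keeps the example self-contained:

```python
import numpy as np

def haar_decompose(x):
    """One level of the orthonormal 1-D Haar wavelet transform."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of haar_decompose."""
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def denoise_haar(signal, threshold):
    """Sparsity-based denoising: soft-threshold the detail coefficients."""
    a, d = haar_decompose(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    return haar_reconstruct(a, d)
```

Piecewise-smooth signals have few large detail coefficients, so thresholding suppresses noise while leaving the signal nearly untouched; the same principle, with redundant 2-D transforms such as curvelets, underlies the astronomical applications above.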
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Despite the numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they remain practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition, before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
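The idea of a deconvolution kernel built from even-derivative FIR filters can be sketched as follows (a 1-D toy with illustrative weights, not the paper's synthesized kernel). Even-order central differences are symmetric, so the combined kernel is zero-phase, and subtracting the second difference (and optionally adding the fourth) boosts the high frequencies that the PSF attenuated:

```python
import numpy as np

def deblur_kernel(alpha2=0.5, alpha4=0.0):
    """Sharpening kernel as a linear combination of even-derivative FIR
    filters: identity - alpha2 * D2 + alpha4 * D4. Even derivatives are
    symmetric, so the kernel introduces no phase distortion, and the
    difference taps sum to zero, so DC gain stays 1.
    """
    ident = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    d2 = np.array([0.0, 1.0, -2.0, 1.0, 0.0])    # central 2nd difference
    d4 = np.array([1.0, -4.0, 6.0, -4.0, 1.0])   # central 4th difference
    return ident - alpha2 * d2 + alpha4 * d4

# One-shot usage on a blurry 1-D signal:
#   sharpened = np.convolve(blurred, deblur_kernel(0.7, 0.1), mode="same")
```

Because the restoration is a single small convolution rather than an iterative optimization, it suits the on-the-fly, high-throughput setting described above.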
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, are also presented.
Comment: 53 pages, 17 figures
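As a concrete instance of the Bayesian-inference category named above, Richardson-Lucy deconvolution is the classical non-blind method: multiplicative updates that maximize the Poisson likelihood of the observed blurry signal given a known kernel (a 1-D sketch, not drawn from this review's text):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Richardson-Lucy deconvolution (1-D): iterative multiplicative
    updates that keep the estimate non-negative and increase the Poisson
    likelihood of the observed blurry data under the given PSF."""
    estimate = np.full_like(blurred, blurred.mean())  # flat initial guess
    psf_flip = psf[::-1]                              # adjoint of the blur
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)  # data / prediction
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

Its ill-posedness shows up exactly as the review describes: with a mis-specified or spatially varying kernel the multiplicative updates amplify noise, which is what the variational, sparse, homography-based, and region-based alternatives try to control.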
SurfelMeshing: Online Surfel-Based Mesh Reconstruction
We address the problem of mesh reconstruction from live RGB-D video, assuming
a calibrated camera and poses provided externally (e.g., by a SLAM system). In
contrast to most existing approaches, we do not fuse depth measurements in a
volume but in a dense surfel cloud. We asynchronously (re)triangulate the
smoothed surfels to reconstruct a surface mesh. This novel approach enables us to
maintain a dense surface representation of the scene during SLAM that can
quickly adapt to loop closures, by deforming the surfel cloud
and asynchronously remeshing the surface where necessary. The surfel-based
representation also naturally supports strongly varying scan resolution. In
particular, it reconstructs colors at the input camera's resolution. Moreover,
in contrast to many volumetric approaches, ours can reconstruct thin objects
since objects do not need to enclose a volume. We demonstrate our approach in a
number of experiments, showing that it produces reconstructions that are
competitive with the state-of-the-art, and we discuss its advantages and
limitations. The algorithm (excluding loop closure functionality) is available
as open source at https://github.com/puzzlepaint/surfelmeshing .
Comment: Version accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
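A surfel and its measurement fusion can be sketched roughly as follows (hypothetical field names and a simple confidence-weighted average, loosely following the surfel-fusion literature rather than this paper's exact update rules). Each surfel is an oriented disc, and new depth measurements are folded into it without any voxel volume:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Surfel:
    """Minimal surfel record: an oriented disc with a running confidence."""
    position: np.ndarray   # 3-D center
    normal: np.ndarray     # unit surface normal
    radius: float          # disc radius (scan resolution)
    confidence: float = 1.0

def fuse(surfel, position, normal, radius, weight=1.0):
    """Fuse a new depth measurement into an existing surfel by a
    confidence-weighted running average: the volume-free counterpart
    to volumetric (TSDF) depth fusion."""
    c = surfel.confidence
    total = c + weight
    surfel.position = (c * surfel.position + weight * position) / total
    n = c * surfel.normal + weight * normal
    surfel.normal = n / np.linalg.norm(n)
    surfel.radius = min(surfel.radius, radius)  # keep the finest observed radius
    surfel.confidence = total
    return surfel
```

Because surfels carry no volumetric grid, deforming the cloud after a loop closure is just moving points, after which only the affected regions need remeshing.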
A Dual Sensor Computational Camera for High Quality Dark Videography
Videos captured under low-light conditions suffer from severe noise. A variety of efforts have been devoted to image/video noise suppression and have made considerable progress. However, in extremely dark scenarios, extensive photon starvation hampers precise noise modeling. Instead, developing an imaging system that collects more photons is a more effective way to capture high-quality video under low illumination. In this paper, we propose building a dual-sensor camera that additionally collects photons in the near-infrared (NIR) band, and we exploit the correlation between the RGB and NIR spectra to perform high-quality reconstruction from noisy dark video pairs. In hardware, we build a compact dual-sensor camera capturing RGB and NIR videos simultaneously. Computationally, we propose a dual-channel multi-frame attention network (DCMAN) that utilizes spatial-temporal-spectral priors to reconstruct the low-light RGB and NIR videos. In addition, we build a high-quality paired RGB and NIR video dataset, based on which the approach can be applied to different sensors easily by training the DCMAN model with simulated noisy input following a physical-process-based CMOS noise model. Experiments on both synthetic and real videos validate the performance of this compact dual-sensor camera design and the corresponding reconstruction algorithm in dark videography.
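A physical-process-based CMOS noise model of the kind used to simulate training input can be sketched as follows (a common textbook simplification with illustrative parameter names, not the paper's exact model): Poisson shot noise on photo-electrons, Gaussian read noise, analog gain, then ADC clipping and quantization:

```python
import numpy as np

def simulate_cmos_noise(clean, gain=2.0, read_std=1.5, full_well=4095, rng=None):
    """Simulate CMOS sensor noise on a noise-free electron-count image.

    Pipeline: Poisson shot noise -> additive Gaussian read noise ->
    analog gain -> ADC quantization and clipping to the bit range.
    """
    rng = np.random.default_rng(rng)
    electrons = rng.poisson(np.maximum(clean, 0.0))       # shot noise
    read = rng.normal(0.0, read_std, size=np.shape(clean))  # read noise
    dn = gain * (electrons + read)                        # analog gain
    return np.clip(np.round(dn), 0, full_well)            # quantize + clip
```

Under photon starvation the Poisson term dominates and the signal-to-noise ratio collapses toward zero, which is the regime that motivates collecting extra NIR photons rather than denoising alone.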