90 research outputs found
Robust Multi-Image HDR Reconstruction for the Modulo Camera
Photographing scenes with high dynamic range (HDR) poses great challenges to
consumer cameras with their limited sensor bit depth. To address this, Zhao et
al. recently proposed a novel sensor concept - the modulo camera - which
captures the least significant bits of the recorded scene instead of going into
saturation. Similar to conventional pipelines, HDR images can be reconstructed
from multiple exposures, but significantly fewer images are needed than with a
typical saturating sensor. While the concept is appealing, we show that the
original reconstruction approach assumes noise-free measurements and quickly
breaks down otherwise. To address this, we propose a novel reconstruction
algorithm that is robust to image noise and produces significantly fewer
artifacts. We theoretically analyze correctness as well as limitations, and
show that our approach significantly outperforms the baseline on real data.
Comment: to appear at the 39th German Conference on Pattern Recognition (GCPR) 2017
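The wrapping behaviour described above can be illustrated with a minimal numpy sketch. The unwrapping at the end is a toy illustration that assumes the wrap count k has already been recovered (e.g. from a second, shorter exposure); it is not the paper's reconstruction algorithm, and the names and bit depth are assumptions:

```python
import numpy as np

def modulo_capture(irradiance, exposure, max_val=256):
    """Simulate a modulo sensor: instead of saturating, the recorded
    value wraps around, keeping only the least significant bits."""
    raw = irradiance * exposure
    return np.mod(raw, max_val)

# A bright pixel that would saturate a conventional 8-bit sensor:
wrapped = modulo_capture(np.array([1000.0]), exposure=1.0)  # 1000 mod 256 = 232
# Once the wrap count k is known, the HDR value is wrapped + k * 256:
k = 3
hdr = wrapped + k * 256  # recovers 1000.0
```

In the noisy setting the papers address, recovering k per pixel is the hard part; small measurement errors near a wrap boundary flip k and produce large artifacts, which is what motivates robust reconstruction.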
Reconstruction from Periodic Nonlinearities, With Applications to HDR Imaging
We consider the problem of reconstructing signals and images from periodic
nonlinearities. For such problems, we design a measurement scheme that supports
efficient reconstruction; moreover, our method can be adapted to extend to
compressive sensing-based signal and image acquisition systems. Our techniques
can be potentially useful for reducing the measurement complexity of high
dynamic range (HDR) imaging systems, with little loss in reconstruction
quality. Several numerical experiments on real data demonstrate the
effectiveness of our approach.
MantissaCam: Learning Snapshot High-dynamic-range Imaging with Perceptually-based In-pixel Irradiance Encoding
The ability to image high-dynamic-range (HDR) scenes is crucial in many
computer vision applications. The dynamic range of conventional sensors,
however, is fundamentally limited by their well capacity, resulting in
saturation of bright scene parts. To overcome this limitation, emerging sensors
offer in-pixel processing capabilities to encode the incident irradiance. Among
the most promising encoding schemes is modulo wrapping, which results in a
computational photography problem where the HDR scene is computed by an
irradiance unwrapping algorithm from the wrapped low-dynamic-range (LDR) sensor
image. Here, we design a neural network-based algorithm that outperforms
previous irradiance unwrapping methods and, more importantly, we design a
perceptually inspired "mantissa" encoding scheme that more efficiently wraps an
HDR scene into an LDR sensor. Combined with our reconstruction framework,
MantissaCam achieves state-of-the-art results among modulo-type snapshot HDR
imaging approaches. We demonstrate the efficacy of our method in simulation and
show preliminary results of a prototype MantissaCam implemented with a
programmable sensor.
Snapshot High Dynamic Range Imaging with a Polarization Camera
High dynamic range (HDR) images are important for a range of tasks, from
navigation to consumer photography. Accordingly, a host of specialized HDR
sensors have been developed, the most successful of which are based on
capturing variable per-pixel exposures. In essence, these methods capture an
entire exposure bracket sequence at once in a single shot. This paper presents
a straightforward but highly effective approach for turning an off-the-shelf
polarization camera into a high-performance HDR camera. By placing a linear
polarizer in front of the polarization camera, we are able to simultaneously
capture four images with varied exposures, which are determined by the
orientation of the polarizer. We develop an outlier-robust and self-calibrating
algorithm to reconstruct an HDR image (at a single polarity) from these
measurements. Finally, we demonstrate the efficacy of our approach with
extensive real-world experiments.
Comment: 9 pages, 10 figures
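The varied exposures arise from Malus's law: each of the camera's four micro-polarizer orientations (0°, 45°, 90°, 135°) is attenuated by cos²(θ − φ) relative to the external polarizer at angle φ. A rough numpy sketch under ideal-polarizer, unpolarized-scene-light assumptions (the function name and example angle are illustrative, not from the paper):

```python
import numpy as np

def effective_exposures(filter_angle_deg):
    """Relative exposure of each polarization sub-pixel under an
    external linear polarizer at `filter_angle_deg` (Malus's law,
    ideal polarizers, unpolarized incident light assumed)."""
    pixel_angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    phi = np.deg2rad(filter_angle_deg)
    return np.cos(pixel_angles - phi) ** 2

# Rotating the external polarizer to ~20 deg gives four distinct
# exposure ratios in a single shot:
ratios = effective_exposures(20.0)
```

At φ = 0° the four sub-pixels see relative exposures of roughly 1, 0.5, 0, and 0.5, i.e. one channel is nearly extinguished; intermediate filter angles spread the four exposures more evenly across the bracket.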
Exposure Fusion for Hand-held Camera Inputs with Optical Flow and PatchMatch
This paper proposes a hybrid synthesis method for multi-exposure image fusion
taken by hand-held cameras. Motions either due to the shaky camera or caused by
dynamic scenes should be compensated before any content fusion. Any
misalignment can easily cause blurring/ghosting artifacts in the fused result.
Our hybrid method can deal with such motions and maintain the exposure
information of each input effectively. In particular, the proposed method first
applies optical flow for a coarse registration, which performs well with
complex non-rigid motion but produces deformations at regions with missing
correspondences. The absence of correspondences is due to occlusions from
scene parallax or moving content. To correct such registration errors, we
segment the images into superpixels, identify problematic alignments per
superpixel, and re-align those superpixels with PatchMatch. Our method combines
the efficiency of optical flow and the accuracy of PatchMatch. After PatchMatch
correction, we obtain a fully aligned image stack that facilitates a
high-quality fusion that is free from blurring/ghosting artifacts. We compare
our method with existing fusion algorithms on various challenging examples,
including static/dynamic, indoor/outdoor, and daytime/nighttime scenes.
Experimental results demonstrate the effectiveness and robustness of our
method.
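As a point of reference for the fusion stage, a generic Mertens-style "well-exposedness" blend of an already-aligned stack can be sketched as follows. This is a common baseline, not the paper's hybrid alignment-plus-fusion method; the array shapes and the sigma value are assumptions:

```python
import numpy as np

def fuse_aligned_stack(stack, sigma=0.2):
    """Blend an already-aligned exposure stack with Gaussian
    'well-exposedness' weights: pixels near mid-gray (0.5) get high
    weight, under- and over-exposed pixels get low weight."""
    stack = np.asarray(stack, dtype=np.float64)          # (n, H, W), values in [0, 1]
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0, keepdims=True)        # normalize per pixel
    return (weights * stack).sum(axis=0)

# Two toy 2x2 exposures of the same (already aligned) scene:
under = np.array([[0.05, 0.10], [0.20, 0.45]])
over  = np.array([[0.40, 0.60], [0.90, 0.55]])
fused = fuse_aligned_stack([under, over])
```

Because the output is a per-pixel convex combination of the inputs, any residual misalignment in the stack blends mismatched content directly into the result, which is why the paper spends its effort on the optical-flow and PatchMatch alignment before this step.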
MELON: NeRF with Unposed Images Using Equivalence Class Estimation
Neural radiance fields enable novel-view synthesis and scene reconstruction
with photorealistic quality from a few images, but require known and accurate
camera poses. Conventional pose estimation algorithms fail on smooth or
self-similar scenes, while methods performing inverse rendering from unposed
views require a rough initialization of the camera orientations. The main
difficulty of pose estimation lies in real-life objects being almost invariant
under certain transformations, making the photometric distance between rendered
views non-convex with respect to the camera parameters. Using an equivalence
relation that matches the distribution of local minima in camera space, we
reduce this space to its quotient set, in which pose estimation becomes a more
convex problem. Using a neural network to regularize pose estimation, we
demonstrate that our method - MELON - can reconstruct a neural radiance field
from unposed images with state-of-the-art accuracy while requiring ten times
fewer views than adversarial approaches.
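The quotient-set idea can be illustrated on a toy case: for a scene with n-fold rotational symmetry about a vertical axis, azimuths θ and θ + 2π/n render identically, so pose optimization can run over the reduced interval [0, 2π/n). The sketch below is purely illustrative of that reduction, not MELON's actual equivalence relation:

```python
import numpy as np

def quotient_azimuth(theta, n_fold):
    """Map a camera azimuth to its equivalence-class representative for
    a scene with n-fold rotational symmetry: theta and theta + 2*pi/n
    produce the same image, so they share one representative in
    [0, 2*pi/n)."""
    period = 2.0 * np.pi / n_fold
    return np.mod(theta, period)

# Two poses that are indistinguishable for a 4-fold-symmetric object
# map to the same representative:
a = quotient_azimuth(0.3, 4)
b = quotient_azimuth(0.3 + np.pi / 2, 4)
```

Collapsing symmetric poses onto one representative is what removes the duplicated local minima from the photometric objective, making the remaining pose-estimation problem better behaved.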
Assessment of multi-exposure HDR image deghosting methods
© 2017 Elsevier Ltd. To avoid motion artefacts when merging multiple exposures into a high dynamic range image, a number of HDR deghosting algorithms have been proposed. However, these algorithms do not work equally well on all types of scenes, and some may even introduce additional artefacts. As the number of proposed deghosting methods is increasing rapidly, there is an immediate need to evaluate them and compare their results. Even though subjective methods of evaluation provide a reliable means of testing, they are often cumbersome and need to be repeated for each newly proposed method or even a slight modification of one. There is therefore a need for objective quality metrics that provide an automatic means of evaluating HDR deghosting algorithms. In this work, we explore several computational approaches to the quantitative evaluation of multi-exposure HDR deghosting algorithms and demonstrate their results on five state-of-the-art algorithms. To perform a comprehensive evaluation, a new dataset consisting of 36 scenes has been created, where each scene poses a different challenge for a deghosting algorithm. The quality of the HDR images produced by each deghosting method is measured in a subjective experiment and then evaluated using objective metrics. As this paper is an extension of our conference paper, we add one more objective quality metric, UDQM, to the evaluation. Furthermore, the analysis of the objective and subjective experiments is performed and explained more extensively in this work. By testing the correlation between objective metrics and subjective scores, we find that, of the tested metrics, HDR-VDP-2 is the most reliable for evaluating HDR deghosting algorithms. The results also show that for most of the tested scenes, Sen et al.'s deghosting method outperforms the other evaluated methods.
The observations based on the obtained results can serve as a guide in the development of new HDR deghosting algorithms that are robust to a variety of scenes and produce high-quality results.
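The metric-versus-subjective-score comparison described above amounts to a rank correlation; a minimal Spearman implementation (no-ties case, computed as Pearson correlation on ranks) might look like this, with made-up placeholder scores rather than the paper's data:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation between objective metric scores and
    mean subjective opinion scores (assumes no tied values)."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical scores for five deghosting results on one scene:
metric_scores     = np.array([62.0, 71.5, 55.2, 80.1, 66.3])
subjective_scores = np.array([3.1, 3.8, 2.6, 4.4, 3.5])
```

A rho near 1 means the objective metric orders the methods the same way human observers do, which is the property the evaluation uses to single out HDR-VDP-2 as the most reliable of the tested metrics.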