Exposure Fusion for Hand-held Camera Inputs with Optical Flow and PatchMatch
This paper proposes a hybrid synthesis method for fusing multi-exposure
images taken by hand-held cameras. Motion, whether from a shaky camera or a
dynamic scene, must be compensated before any content fusion, since even
slight misalignment causes blurring/ghosting artifacts in the fused result.
Our hybrid method handles such motion while effectively preserving the
exposure information of each input. In particular, the proposed method first
applies optical flow for a coarse registration, which performs well under
complex non-rigid motion but produces deformations in regions with missing
correspondences, caused by occlusions from scene parallax or moving content.
To correct such registration errors, we segment the images into superpixels,
identify problematic alignments per superpixel, and re-align those
superpixels with PatchMatch. The method thus combines the efficiency of
optical flow with the accuracy of PatchMatch. After PatchMatch correction,
we obtain a fully aligned image stack that enables high-quality fusion free
from blurring/ghosting artifacts. We compare our method with existing fusion
algorithms on various challenging examples, including static/dynamic,
indoor/outdoor, and daytime/nighttime scenes. Experimental results
demonstrate the effectiveness and robustness of our method.
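The coarse-then-correct idea can be illustrated with a minimal NumPy sketch (an illustration, not the paper's implementation: the flow is given rather than estimated, fixed grid blocks stand in for superpixels, and `warp_by_flow` and `flag_misaligned_blocks` are hypothetical helper names):

```python
import numpy as np

def warp_by_flow(img, flow):
    """Backward-warp img with a dense flow field (nearest-neighbour for brevity)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # sample coordinates shifted by the flow, clamped to the image border
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def flag_misaligned_blocks(ref, warped, block=4, thresh=0.1):
    """Stand-in for the superpixel test: mark blocks whose mean absolute
    residual after coarse registration exceeds `thresh`; those regions
    would be re-aligned by PatchMatch in the full method."""
    h, w = ref.shape
    flags = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            err = np.abs(ref[y:y+block, x:x+block] - warped[y:y+block, x:x+block]).mean()
            flags[(y, x)] = err > thresh
    return flags

# toy example: a constant shift that the coarse "flow" compensates exactly
ref = np.zeros((8, 8)); ref[2:6, 2:6] = 1.0
src = np.roll(ref, 1, axis=1)                      # scene shifted by one pixel
flow = np.zeros((8, 8, 2)); flow[..., 0] = 1.0     # coarse flow undoes the shift
warped = warp_by_flow(src, flow)
flags = flag_misaligned_blocks(ref, warped)
print(any(flags.values()))  # False: every block is well aligned
```

In the full method the residual test runs per superpixel rather than per grid block, so flagged regions follow object boundaries instead of a fixed lattice.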
Training Implicit Networks for Image Deblurring using Jacobian-Free Backpropagation
Recent efforts in applying implicit networks to solve inverse problems in
imaging have achieved competitive or even superior results when compared to
feedforward networks. These implicit networks only require constant memory
during backpropagation, regardless of the number of layers. However, they are
not necessarily easy to train. Gradient calculations are computationally
expensive because they require backpropagating through a fixed point. In
particular, this process requires solving a large linear system whose size is
determined by the number of features in the fixed point iteration. This paper
explores a recently proposed method, Jacobian-free Backpropagation (JFB), a
backpropagation scheme that circumvents such calculation, in the context of
image deblurring problems. Our results show that JFB is comparable to
fine-tuned optimization schemes, state-of-the-art (SOTA) feedforward
networks, and existing implicit networks, at a reduced computational cost.
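For intuition, here is a minimal scalar sketch of the idea behind JFB (an illustration of the technique, not the paper's network): for a fixed point z = T(z; w), the implicit-function-theorem gradient is (∂T/∂w)/(1 − ∂T/∂z), and JFB simply drops the inverse factor, avoiding the linear solve:

```python
import numpy as np

def fixed_point(w, x, iters=200):
    """Solve z = T(z) = 0.5*tanh(w*z) + x by simple iteration (a contraction)."""
    z = 0.0
    for _ in range(iters):
        z = 0.5 * np.tanh(w * z) + x
    return z

w, x = 1.0, 0.5
z = fixed_point(w, x)

s = 1.0 - np.tanh(w * z) ** 2         # sech^2(w*z)
dT_dz = 0.5 * w * s                   # partial of T w.r.t. z at the fixed point
dT_dw = 0.5 * z * s                   # partial of T w.r.t. w at the fixed point

exact_grad = dT_dw / (1.0 - dT_dz)    # implicit-function-theorem gradient
jfb_grad = dT_dw                      # JFB: replace (I - dT/dz)^{-1} by I

# sanity check: the exact gradient matches a finite difference,
# and the JFB surrogate points in the same direction
fd = (fixed_point(w + 1e-6, x) - fixed_point(w - 1e-6, x)) / 2e-6
print(abs(exact_grad - fd) < 1e-5, jfb_grad * exact_grad > 0)  # True True
```

In a network, (1 − ∂T/∂z) becomes the large linear system mentioned above; JFB's surrogate gradient skips that solve while remaining a valid descent direction under suitable conditions.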
GyroFlow: Gyroscope-Guided Unsupervised Optical Flow Learning
Existing optical flow methods are error-prone in challenging scenes such as
fog, rain, and night, because basic optical flow assumptions such as
brightness and gradient constancy are violated. To address this problem, we
present an unsupervised learning approach that fuses gyroscope data into
optical flow learning. Specifically, we first convert gyroscope readings
into a motion field, named the gyro field. Then, we design a self-guided
fusion module to fuse
the background motion extracted from the gyro field with the optical flow and
guide the network to focus on motion details. To the best of our knowledge,
this is the first deep learning-based framework that fuses gyroscope data and
image content for optical flow learning. To validate our method, we propose a
new dataset that covers regular and challenging scenes. Experiments show that
our method outperforms state-of-the-art methods in both regular and
challenging scenes.
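One standard way to convert gyroscope readings into a per-pixel motion field is through the rotation-induced homography H = K R K⁻¹; the sketch below assumes this construction (with a small-angle rotation and hypothetical intrinsics K) and is an illustration rather than the paper's exact conversion:

```python
import numpy as np

def gyro_field(omega, dt, K, h, w):
    """Motion field induced by a pure camera rotation (a "gyro field").
    omega: angular velocity (rad/s) from the gyroscope, dt: frame interval,
    K: 3x3 camera intrinsics. Uses the small-angle rotation R = I + [omega*dt]_x."""
    wx, wy, wz = np.asarray(omega, dtype=float) * dt
    R = np.array([[1.0, -wz,  wy],
                  [wz,  1.0, -wx],
                  [-wy, wx,  1.0]])
    H = K @ R @ np.linalg.inv(K)                   # rotation-induced homography
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pts = np.stack([xs, ys, np.ones_like(xs)])     # homogeneous pixel grid
    moved = np.einsum('ij,jhw->ihw', H, pts)
    moved = moved[:2] / moved[2]                   # perspective divide
    # per-pixel displacement (dx, dy): the gyro field
    return np.stack([moved[0] - xs, moved[1] - ys], axis=-1)

# hypothetical intrinsics for a 64x48 image
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]])
field = gyro_field([0.0, 0.0, 0.0], 1 / 30, K, 48, 64)
print(np.abs(field).max())  # 0.0: no rotation, no gyro motion
```

Because the homography depends only on rotation, such a field captures background (camera) motion but not object motion, which is why it is fused with, rather than substituted for, the learned optical flow.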
Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network
This paper presents a novel network structure with illumination-aware gamma
correction and complete image modelling to solve the low-light image
enhancement problem. Low-light environments usually produce large, less
informative dark areas, so directly learning deep representations from
low-light images is ineffective for recovering normal illumination. We
propose to
integrate the effectiveness of gamma correction with the strong modelling
capacities of deep networks, which enables the correction factor gamma to be
learned in a coarse-to-fine manner by adaptively perceiving the deviated
illumination. Because the exponential operation introduces high
computational complexity, we propose using a Taylor series to approximate
gamma correction, accelerating training and inference. Dark areas usually
occupy large regions of low-light images, so common local modelling
structures, e.g., CNN and SwinIR, are insufficient to recover accurate
illumination across whole low-light images. We propose a novel Transformer
block to completely simulate
the dependencies of all pixels across images via a local-to-global hierarchical
attention mechanism, so that dark areas can be inferred by borrowing
information from distant informative regions in a highly effective manner.
Extensive experiments on several benchmark datasets demonstrate that our
approach outperforms state-of-the-art methods. Comment: Accepted by ICCV 202
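The Taylor-series idea can be sketched as follows (an illustration under our own choice of truncation order and intensity range, not the paper's implementation): since x^γ = exp(γ ln x), the costly exponentiation can be replaced by a truncated series for exp:

```python
import numpy as np

def gamma_taylor(x, gamma, terms=10):
    """Approximate x**gamma = exp(gamma*log(x)) with a truncated Taylor
    series of exp, avoiding the exponentiation (eps guards log(0))."""
    t = gamma * np.log(np.maximum(x, 1e-4))
    out = np.zeros_like(x)
    term = np.ones_like(x)
    for k in range(terms):
        out += term                   # accumulate t^k / k!
        term = term * t / (k + 1)
    return out

x = np.linspace(0.05, 1.0, 100)       # normalised low-light intensities
approx = gamma_taylor(x, 0.45)        # gamma < 1 brightens dark pixels
exact = x ** 0.45
print(np.max(np.abs(approx - exact)) < 1e-3)  # True
```

With intensities bounded away from zero, γ ln x stays in a small range, so a short series already matches the exact gamma curve closely while using only multiplies and adds.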
