A robust patch-based synthesis framework for combining inconsistent images
Current methods for combining different images produce visible artifacts when the sources have very different textures and structures, are captured from distant viewpoints, or capture dynamic scenes with motion. In this thesis, we propose a patch-based synthesis algorithm to plausibly combine images that have color, texture, structural, and geometric inconsistencies. For applications such as cloning and stitching, where a gradual blend is required, we present a new method for synthesizing a transition region between two source images such that inconsistent properties change gradually from one source to the other. We call this process image melding. For gradual blending, we generalize the patch-based optimization framework in three key ways: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. Third, we propose a new energy based on mixed L2/L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, these three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. We also demonstrate another application that requires addressing inconsistencies across images: high dynamic range (HDR) reconstruction from sequential exposures. In this application, the results suffer from objectionable artifacts for dynamic scenes if the inconsistencies caused by significant scene motion are not handled properly. In this thesis, we propose a new approach to HDR reconstruction that uses information from all exposures while being more robust to motion than previous techniques.
Our algorithm is based on a novel patch-based energy-minimization formulation that integrates alignment and reconstruction in a joint optimization through an equation we call the HDR image synthesis equation. This allows us to produce an HDR result that is aligned to one of the exposures yet contains information from all of them. These two applications (image melding and high dynamic range reconstruction) show that patch-based methods like the one proposed in this dissertation can address inconsistent images and could open the door to many new image editing applications in the future.
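The screened-Poisson step mentioned in the abstract (replacing plain color averaging with a joint fit to target colors and gradients) can be illustrated in one dimension. The following is a minimal sketch under assumed notation, not the thesis's actual solver: it minimizes ||x - c||^2 + lam * ||Dx - g||^2 for a forward-difference operator D, whose normal equations are (I + lam * D^T D) x = c + lam * D^T g. The function name and the scalar weight `lam` are illustrative choices.

```python
import numpy as np

def screened_poisson_1d(c, g, lam=10.0):
    """Blend target colors c (length n) with target gradients g (length n-1)
    by solving the 1-D screened Poisson normal equations
    (I + lam * D^T D) x = c + lam * D^T g.
    Illustrative sketch only -- the thesis solves this on 2-D images."""
    n = len(c)
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator, (n-1) x n
    A = np.eye(n) + lam * D.T @ D
    b = c + lam * D.T @ g
    return np.linalg.solve(A, b)

# With a large gradient weight, the result follows the smooth target
# gradients while keeping the mean of the target colors.
c = np.array([0.0, 0.0, 1.0, 1.0])       # step-like colors
g = np.full(3, 1.0 / 3.0)                # smooth ramp gradients
x = screened_poisson_1d(c, g, lam=100.0)
```

Because the rows of `D` sum to zero, the solve preserves the mean of `c` exactly; only the local variation is redistributed toward the target gradients.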
Locally Non-rigid Registration for Mobile HDR Photography
Image registration for stack-based HDR photography is challenging. If not
properly accounted for, camera motion and scene changes result in artifacts in
the composite image. Unfortunately, existing methods to address this problem
are either accurate but too slow for mobile devices, or fast but prone to
failure. We propose a method that fills this void: our approach is extremely
fast (under 700 ms on a commercial tablet for a pair of 5 MP images) and
prevents the artifacts that arise from insufficient registration quality.
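The paper's locally non-rigid registration is beyond a short sketch, but the kind of coarse global alignment that typically precedes such local refinement can be illustrated with phase correlation. This is a generic, illustrative baseline, not the paper's method:

```python
import numpy as np

def phase_correlate(ref, mov):
    """Estimate the integer translation (dy, dx) such that
    np.roll(mov, (dy, dx), axis=(0, 1)) best matches ref, via phase
    correlation. A common coarse first step; locally non-rigid methods
    refine alignment far beyond a single global shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Recover a known circular shift between two frames
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (5, -7), axis=(0, 1))
dy, dx = phase_correlate(ref, mov)
```

For real burst pairs the peak is blurred by exposure differences and local motion, which is exactly the regime where a locally non-rigid model earns its keep.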
A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging
Recently, impressive denoising results have been achieved by Bayesian
approaches which assume Gaussian models for the image patches. This improvement
in performance can be attributed to the use of per-patch models. Unfortunately,
such an approach is particularly unstable for most inverse problems beyond
denoising. In this work, we propose the use of a hyperprior to model image
patches, in order to stabilize the estimation procedure. There are two main
advantages to the proposed restoration scheme. First, it is adapted to
diagonal degradation matrices, and in particular to missing-data problems (e.g.
inpainting of missing pixels or zooming). Second, it can deal with
signal-dependent noise models, which are particularly suited to digital cameras. As such, the
scheme is especially adapted to computational photography. In order to
illustrate this point, we provide an application to high dynamic range imaging
from a single image taken with a modified sensor, which shows the effectiveness
of the proposed scheme.
Comment: Some figures are reduced to comply with arXiv's size constraints.
Full-size images are available as HAL technical report hal-01107519v5. IEEE
Transactions on Computational Imaging, 201
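The core restoration step described above, a Gaussian patch model combined with a diagonal degradation such as pixel masking, can be sketched as a posterior-mean (MAP) estimate. This is an illustrative sketch of the underlying Gaussian restoration only; the paper's contribution is the hyperprior placed on the per-patch mean and covariance, which is not reproduced here:

```python
import numpy as np

def map_restore_patch(y, mask, mu, Sigma, sigma_noise):
    """MAP/posterior-mean restoration of a vectorized patch x under a
    Gaussian prior x ~ N(mu, Sigma), observing y = x + noise only where
    mask == 1 (a diagonal degradation: inpainting / zooming).
    xhat = mu + Sigma[:, obs] (Sigma[obs, obs] + s^2 I)^{-1} (y_obs - mu_obs)
    Illustrative only; the paper stabilizes (mu, Sigma) with a hyperprior."""
    obs = np.asarray(mask, dtype=bool)
    S = Sigma[np.ix_(obs, obs)] + sigma_noise**2 * np.eye(int(obs.sum()))
    gain = np.linalg.solve(S, y[obs] - mu[obs])
    return mu + Sigma[:, obs] @ gain

# Toy 3-pixel patch: pixels 0 and 1 observed, pixel 2 missing.
# The prior covariance couples pixels 1 and 2, so the missing value
# is pulled toward the observation it correlates with.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.5],
                  [0.0, 0.5, 1.0]])
y = np.array([1.0, 2.0, 0.0])            # third entry is unobserved
mask = np.array([1, 1, 0])
xhat = map_restore_patch(y, mask, mu, Sigma, sigma_noise=0.0)
```

With zero noise the observed pixels are reproduced exactly, and the missing pixel is filled in as 0.5 * 2.0 = 1.0 through the prior correlation, which is the mechanism that makes such models attractive for missing-data problems.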
DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs
We present a novel deep learning architecture for fusing static
multi-exposure images. Current multi-exposure fusion (MEF) approaches use
hand-crafted features to fuse the input sequence. However, these weak hand-crafted
representations are not robust to varying input conditions. Moreover, they
perform poorly for extreme exposure image pairs. Thus, it is highly desirable
to have a method that is robust to varying input conditions and capable of
handling extreme exposures without artifacts. Deep representations are known to
be robust to varying input conditions and have shown phenomenal performance in
supervised settings. However, the stumbling block in using deep learning for MEF
was the lack of sufficient training data and an oracle to provide the
ground-truth for supervision. To address the above issues, we have gathered a
large dataset of multi-exposure image stacks for training, and to circumvent the
need for ground-truth images, we propose an unsupervised deep learning
framework for MEF that uses a no-reference quality metric as its loss function. The
proposed approach uses a novel CNN architecture trained to learn the fusion
operation without a reference ground-truth image. The model fuses a set of common
low-level features extracted from each image to generate artifact-free,
perceptually pleasing results. We perform extensive quantitative and
qualitative evaluation and show that the proposed technique outperforms
existing state-of-the-art approaches for a variety of natural images.
Comment: ICCV 201
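The hand-crafted MEF baseline that the abstract argues against can be illustrated with a single "well-exposedness" weight: pixels near mid-gray are trusted, extreme pixels are down-weighted. This is a deliberately naive sketch of the heuristic family DeepFuse replaces with learned CNN features and a no-reference loss; the function and its Gaussian weighting are illustrative assumptions, not part of the paper:

```python
import numpy as np

def naive_exposure_fusion(stack, sigma=0.2):
    """Fuse a stack of grayscale exposures of shape (N, H, W), values in
    [0, 1], using a hand-crafted well-exposedness weight. Pixels near
    mid-gray (0.5) get high weight; clipped shadows/highlights get low
    weight. Illustrative baseline only -- not DeepFuse."""
    stack = np.asarray(stack, dtype=float)
    w = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)   # per-pixel weights
    w /= w.sum(axis=0, keepdims=True) + 1e-12         # normalize over stack
    return (w * stack).sum(axis=0)                    # weighted average

# Three flat exposures: under, mid, over. The symmetric weights pull
# the fused result to the well-exposed mid-gray value.
stack = np.stack([np.full((2, 2), 0.1),
                  np.full((2, 2), 0.5),
                  np.full((2, 2), 0.9)])
fused = naive_exposure_fusion(stack)
```

Such per-pixel heuristics break down on extreme exposure pairs, where neither source is near mid-gray anywhere, which is precisely the failure mode the abstract highlights.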