A Perceptually Optimized and Self-Calibrated Tone Mapping Operator
With the increasing popularity and accessibility of high dynamic range (HDR)
photography, tone mapping operators (TMOs) for dynamic range compression are
practically demanding. In this paper, we develop a two-stage neural
network-based TMO that is self-calibrated and perceptually optimized. In Stage
one, motivated by the physiology of the early stages of the human visual
system, we first decompose an HDR image into a normalized Laplacian pyramid. We
then use two lightweight deep neural networks (DNNs), taking the normalized
representation as input and estimating the Laplacian pyramid of the
corresponding LDR image. We optimize the tone mapping network by minimizing the
normalized Laplacian pyramid distance (NLPD), a perceptual metric aligning with
human judgments of tone-mapped image quality. In Stage two, the input HDR image
is self-calibrated to compute the final LDR image. We feed the same HDR image
but rescaled with different maximum luminances to the learned tone mapping
network, and generate a pseudo-multi-exposure image stack with different detail
visibility and color saturation. We then train another lightweight DNN to fuse
the LDR image stack into a desired LDR image by maximizing a variant of the
structural similarity index for multi-exposure image fusion (MEF-SSIM), which
has been proven perceptually relevant to fused image quality. The proposed
self-calibration mechanism through MEF enables our TMO to accept uncalibrated
HDR images, while being physiology-driven. Extensive experiments show that our
method produces images with consistently better visual quality. Additionally,
since our method builds upon three lightweight DNNs, it is among the fastest
local TMOs.
Comment: 20 pages, 18 figures
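The Stage-one input representation can be illustrated compactly. Below is a minimal NumPy sketch of a normalized Laplacian pyramid (log-luminance, band-pass filtering, divisive normalization by local amplitude); the blur kernel, pyramid depth, and the stabilizing constant `eps` are illustrative assumptions, not the paper's trained components.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian blur implemented directly in NumPy.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='reflect')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def normalized_laplacian_pyramid(hdr, levels=3, eps=0.17):
    """Decompose an HDR luminance image into locally normalized
    band-pass coefficients plus a low-pass residual."""
    img = np.log(np.maximum(hdr, 1e-6))  # log-luminance, as in early-vision models
    pyramid = []
    for _ in range(levels - 1):
        low = gaussian_blur(img)
        band = img - low                          # Laplacian (band-pass) coefficients
        norm = gaussian_blur(np.abs(band)) + eps  # local amplitude estimate
        pyramid.append(band / norm)               # divisive normalization
        img = low[::2, ::2]                       # downsample for the next scale
    pyramid.append(img)                           # low-pass residual
    return pyramid
```

In the paper this normalized representation is what the two lightweight DNNs consume; here it is computed with a fixed analytic filter purely for illustration.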
Non-Iterative Tone Mapping With High Efficiency and Robustness
This paper proposes an efficient tone mapping approach that delivers high perceptual image quality across diverse scenes. Most existing methods that optimize images against a perceptual model rely on an iterative process, which is time consuming. To solve this problem, we propose a new layer-based, non-iterative approach that finds an optimal detail layer for generating a tone-mapped image. The proposed method consists of the following three steps. First, an image is decomposed into a base layer and a detail layer to separate the illumination and detail components. Next, the base layer is globally compressed by applying a statistical naturalness model based on the statistics of luminance and contrast in natural scenes. The detail layer is locally optimized based on a structure fidelity measure, which represents the degree of local structural detail preservation. Finally, the proposed method constructs the final tone-mapped image by combining the resultant layers. The performance evaluation reveals that the proposed method outperforms the benchmarking methods for almost all benchmark test images. Specifically, the proposed method improves the average tone mapping quality index-II (TMQI-II), feature similarity index for tone-mapped images (FSITM), and high dynamic range visible difference predictor (HDR-VDP)-2.2 scores by up to 0.651 (223.4%), 0.088 (11.5%), and 10.371 (25.2%), respectively, compared with the benchmarking methods, while improving processing speed by over 2611 times. Furthermore, the proposed method decreases the standard deviations of TMQI-II, FSITM, HDR-VDP-2.2, and processing time by up to 81.4%, 18.9%, 12.6%, and 99.9%, respectively, when compared with the benchmarking methods.
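The three-step layer pipeline above can be sketched in a few lines. The NumPy version below substitutes a simple box filter for the decomposition and a single global factor for the statistical-naturalness base compression, and it skips the per-pixel detail optimization entirely; `compression` and `radius` are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def tone_map_layers(luminance, compression=0.6, radius=3):
    """Layer-based tone mapping sketch: split log-luminance into a
    smooth base layer and a detail layer, compress only the base,
    then recombine and exponentiate back to linear luminance."""
    log_l = np.log(np.maximum(luminance, 1e-6))
    # Base layer: local box average (stand-in for the paper's decomposition filter).
    pad = np.pad(log_l, radius, mode='edge')
    k = 2 * radius + 1
    base = np.zeros_like(log_l)
    for dy in range(k):
        for dx in range(k):
            base += pad[dy:dy + log_l.shape[0], dx:dx + log_l.shape[1]]
    base /= k * k
    detail = log_l - base                  # detail layer, kept at full strength
    out_log = compression * base + detail  # compress only the illumination base
    return np.exp(out_log)
```

Because only the base layer is attenuated, the overall dynamic range shrinks while local detail contrast is preserved, which is the core intuition behind base/detail tone mapping.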
Live User-guided Intrinsic Video For Static Scenes
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications, such as recoloring of objects, relighting of scenes, and editing of material appearance.
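The constraint propagation by re-projection can be sketched as a standard pinhole projection of 3D-anchored stroke labels into a novel view; the function name, integer label encoding, and camera parameters below are illustrative assumptions rather than the system's actual interface.

```python
import numpy as np

def reproject_constraints(points_3d, labels, K, R, t, shape):
    """Project 3D-anchored user strokes (e.g. constant-reflectance labels)
    into a novel camera view, producing a sparse per-pixel constraint map."""
    h, w = shape
    cam = R @ points_3d.T + t[:, None]   # world -> camera coordinates
    valid = cam[2] > 0                   # keep points in front of the camera
    uvz = K @ cam[:, valid]              # camera -> homogeneous pixel coords
    u = np.round(uvz[0] / uvz[2]).astype(int)
    v = np.round(uvz[1] / uvz[2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    cmap = np.full((h, w), -1, dtype=int)  # -1 marks unconstrained pixels
    cmap[v[inside], u[inside]] = labels[valid][inside]
    return cmap
```

Storing strokes on the reconstructed geometry means this projection can be repeated for every new camera pose, which is what lets a single user annotation constrain the decomposition across all subsequent frames.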