A Comprehensive Study of Tone Mapping of High Dynamic Range Images with Subjective Tests
A high dynamic range (HDR) image has a very wide range of luminance levels that
traditional low dynamic range (LDR) displays cannot visualize. For this reason, HDR
images are usually stored in an 8-bit-per-channel representation in which the alpha
channel of each pixel is used as an exponent value, sometimes referred to as exponential
notation [43]. Tone mapping operators (TMOs) transform the high dynamic range to the
low dynamic range domain by compressing pixel values so that traditional LDR displays can
visualize them. The purpose of this thesis is to identify and analyse differences and
similarities between the wide range of tone mapping operators that are available in the
literature. Each TMO has been analysed through subjective studies under different
conditions, including environment, luminance, and colour. Several inverse
tone mapping operators, HDR mappings with exposure fusion, histogram adjustment,
and retinex have also been analysed in this study. 19 different TMOs have been examined
using a variety of HDR images. A mean opinion score (MOS) is calculated for the selected
TMOs from the ratings of 25 independent participants, accounting for the participants'
age, vision, and colour blindness.
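Tone mapping operators compress HDR luminance into the displayable range. As a concrete illustration of what the surveyed TMOs do, here is a minimal sketch of the global Reinhard operator in NumPy (a common baseline TMO; the constants and simplifications are illustrative, not taken from the thesis):

```python
import numpy as np

def reinhard_tmo(hdr, a=0.18, eps=1e-6):
    """Global Reinhard tone mapping: compress HDR luminance into [0, 1)."""
    # Rec. 709 luminance of each pixel.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Log-average ("key") of the scene luminance.
    lum_avg = np.exp(np.mean(np.log(lum + eps)))
    lum_scaled = a * lum / lum_avg
    lum_mapped = lum_scaled / (1.0 + lum_scaled)        # maps [0, inf) -> [0, 1)
    # Rescale the colour channels by the luminance ratio.
    ratio = (lum_mapped / (lum + eps))[..., None]
    return np.clip(hdr * ratio, 0.0, 1.0)

rng = np.random.default_rng(0)
hdr = rng.uniform(0.01, 1e4, size=(4, 4, 3))            # synthetic HDR radiance
ldr = reinhard_tmo(hdr)
```

The sigmoid-like compression keeps relative brightness ordering while guaranteeing a bounded output, which is the shared goal of the operators compared in the study.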
Hardware-based smart camera for recovering high dynamic range video from multiple exposures
In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed-exposure ones. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
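The merging step of such a multiple-exposure pipeline can be sketched in software as a weighted average of per-exposure radiance estimates (a generic simplification assuming a linear sensor response; the camera described above implements its own hardware pipeline):

```python
import numpy as np

def merge_exposures(images, times, eps=1e-8):
    """Merge linear-response exposures into one HDR radiance map.

    images: list of arrays in [0, 1]; times: exposure times in seconds.
    Hat-shaped weights favour well-exposed pixels (a common simplification).
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 1 at mid-grey, 0 at the extremes
        num += w * img / t                 # radiance estimate from this exposure
        den += w
    return num / (den + eps)

# Three synthetic exposures of the same scene (linear sensor model).
radiance = np.array([[0.02, 0.5, 9.0]])
times = [0.01, 0.1, 1.0]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(shots, times)
```

Saturated pixels receive zero weight, so the dark detail comes from the long exposure and the bright detail from the short one, which is exactly why multi-exposure capture reveals both ends of the scene's dynamic range.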
Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement
Low-light image enhancement strives to improve the contrast, adjust the
visibility, and restore the distortion in color and texture. Existing methods
usually pay more attention to improving the visibility and contrast via
increasing the lightness of low-light images, while disregarding the
significance of color and texture restoration for high-quality images. To
address this issue, we propose a novel luminance and chrominance dual-branch network,
termed LCDBNet, which divides low-light image
enhancement into two sub-tasks, i.e., luminance adjustment and chrominance
restoration. Specifically, LCDBNet is composed of two branches, namely a
luminance adjustment network (LAN) and a chrominance restoration network (CRN).
LAN is responsible for learning brightness-aware features by leveraging
long-range dependency and local attention correlation, while CRN concentrates
on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to
produce visually impressive images. Extensive experiments conducted on seven
benchmark datasets validate the effectiveness of our proposed LCDBNet, and the
results manifest that LCDBNet achieves superior performance in terms of
multiple reference/non-reference quality evaluators compared to other
state-of-the-art competitors. Our code and pretrained model will be available. Comment: 14 pages, 16 figures
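The luminance/chrominance split that motivates the two branches can be illustrated with a simple BT.601-style colour transform (an illustrative stand-in; LCDBNet learns its branch representations rather than using a fixed transform):

```python
import numpy as np

def split_luma_chroma(rgb):
    """Split an RGB image into luminance Y and chrominance (Cb, Cr),
    mirroring the dual-branch idea: brightness is adjusted on Y while
    colour is restored on the chroma planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b     # BT.601 luma weights
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def merge_luma_chroma(y, cb, cr):
    """Invert the split exactly (no quantisation here)."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

rgb = np.random.default_rng(1).uniform(0.0, 1.0, size=(2, 2, 3))
y, cb, cr = split_luma_chroma(rgb)
# A toy "luminance adjustment": gamma-brighten Y only, keep chroma fixed.
out = merge_luma_chroma(y ** 0.8, cb, cr)
```

Adjusting Y alone changes brightness without shifting hue, which is the property the LAN/CRN decomposition exploits.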
Stereoscopic high dynamic range imaging
Two modern technologies show promise to dramatically increase immersion in
virtual environments. Stereoscopic imaging captures two images representing
the views of both eyes and allows for better depth perception. High dynamic
range (HDR) imaging accurately represents real world lighting as opposed to
traditional low dynamic range (LDR) imaging. HDR provides a better contrast
and more natural looking scenes. The combination of the two technologies in
order to gain advantages of both has been, until now, mostly unexplored due to
the current limitations in the imaging pipeline. This thesis reviews both fields,
proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the
challenges that need to be resolved to enable SHDR, and focuses on the capture and
compression aspects of that pipeline.
The problems of capturing SHDR images, which would potentially require two
HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR
pair and using it to generate SHDR images. A detailed user study compared
four different methods of generating SHDR images. Results demonstrated that
one of the methods may produce images perceptually indistinguishable from the
ground truth.
Insights obtained while developing static image operators guided the design
of SHDR video techniques. Three methods for generating SHDR video from an
HDR-LDR video pair are proposed and compared to the ground truth SHDR
videos. Results showed little overall error and identified a method with the least
error.
Once captured, SHDR content needs to be efficiently compressed. Five SHDR
compression methods that are backward compatible are presented. The proposed
methods can encode SHDR content at a size only slightly larger than that of a
traditional single LDR image (18% larger for one method), and the backward-compatibility
property encourages early adoption of the format.
The work presented in this thesis has introduced and advanced capture and
compression methods for the adoption of SHDR imaging. In general, this research
paves the way for the novel field of SHDR imaging, which should lead to improved
and more realistic representations of captured scenes.
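The backward-compatible idea behind such codecs can be sketched as a base layer plus a residual (a generic layering scheme, not the thesis's exact codec): a legacy decoder reads only the tone-mapped base layer, while an HDR-aware decoder also applies a log-ratio residual.

```python
import numpy as np

def encode_backward_compatible(hdr, tmo, eps=1e-6):
    """Layered encoding: an LDR base for legacy displays, plus a
    log-ratio residual that recovers the full dynamic range."""
    base = tmo(hdr)                                 # LDR layer (legacy-visible)
    residual = np.log((hdr + eps) / (base + eps))   # per-pixel expansion ratio
    return base, residual

def decode_hdr(base, residual, eps=1e-6):
    """HDR-aware decoding: re-expand the base layer with the residual."""
    return (base + eps) * np.exp(residual) - eps

hdr = np.array([0.01, 1.0, 50.0])
base, res = encode_backward_compatible(hdr, lambda x: x / (1.0 + x))
rec = decode_hdr(base, res)
```

Because the base layer is an ordinary LDR image, existing decoders keep working unchanged, which is the backward-compatibility property the thesis highlights.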
CNN Injected Transformer for Image Exposure Correction
Capturing images with incorrect exposure settings fails to deliver a
satisfactory visual experience. Only when the exposure is properly set can the
color and details of the images be appropriately preserved. Previous exposure
correction methods based on convolutions often produce exposure deviation in
images as a consequence of the restricted receptive field of convolutional
kernels. This issue arises because convolutions are not capable of capturing
long-range dependencies in images accurately. To overcome this challenge, we
can apply the Transformer to address the exposure correction problem,
leveraging its capability in modeling long-range dependencies to capture global
representation. However, solely relying on the window-based Transformer leads
to visually disturbing blocking artifacts due to the application of
self-attention in small patches. In this paper, we propose a CNN Injected
Transformer (CIT) to harness the individual strengths of CNN and Transformer
simultaneously. Specifically, we construct the CIT by utilizing a window-based
Transformer to exploit the long-range interactions among different regions in
the entire image. Within each CIT block, we incorporate a channel attention
block (CAB) and a half-instance normalization block (HINB) to assist the
window-based self-attention to acquire the global statistics and refine local
features. In addition to the hybrid architecture design for exposure
correction, we apply a set of carefully formulated loss functions to improve
the spatial coherence and rectify potential color deviations. Extensive
experiments demonstrate that our image exposure correction method outperforms
state-of-the-art approaches in terms of both quantitative and qualitative
metrics.
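Window-based self-attention operates on small non-overlapping patches, which is why blocking artifacts can appear at window borders. A minimal NumPy sketch of the window partition/merge step (the surrounding attention and CNN blocks of CIT are omitted; the shapes and helper names are illustrative):

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows,
    the unit on which window-based self-attention operates."""
    h, w, c = x.shape
    x = x.reshape(h // ws, ws, w // ws, ws, c)
    # Group the window indices together: (n_windows, ws*ws tokens, C).
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, c)

def window_merge(wins, ws, h, w):
    """Invert window_partition back to an (H, W, C) feature map."""
    c = wins.shape[-1]
    x = wins.reshape(h // ws, w // ws, ws, ws, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

feat = np.arange(4 * 4 * 1, dtype=np.float64).reshape(4, 4, 1)
wins = window_partition(feat, 2)     # 4 windows of 2x2 tokens each
restored = window_merge(wins, 2, 4, 4)
```

Attention computed independently inside each window never mixes tokens across window borders; CIT's injected CNN branches are one way to restore that cross-window information.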
Inverse tone mapping
The introduction of High Dynamic Range Imaging in computer graphics has produced an
advance in imaging comparable to, or even greater than, the introduction of colour photography.
Light can now be captured, stored, processed, and finally visualised without losing information.
Moreover, new applications that can exploit physical values of the light have been introduced
such as re-lighting of synthetic/real objects, or enhanced visualisation of scenes. However,
these new processing and visualisation techniques cannot be applied to the movies and pictures
that photography and cinematography have produced over more than one hundred years.
This thesis introduces a general framework for expanding legacy content into High Dynamic
Range content. The expansion is achieved while avoiding artefacts, producing images suitable
for visualisation and for re-lighting of synthetic/real objects. Moreover, a methodology
based on psychophysical experiments and computational metrics is presented to measure the
performance of expansion algorithms. Finally, a compression scheme for High
Dynamic Range textures, inspired by the framework, is proposed and evaluated.
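A minimal example of inverse tone mapping (expansion) is gamma linearisation followed by scaling to a target peak luminance (a naive sketch under assumed parameters; the thesis's framework additionally treats over-exposed regions specially to avoid artefacts):

```python
import numpy as np

def expand_ldr(ldr, max_lum=1000.0, gamma=2.2):
    """Naive inverse tone mapping: undo the display gamma, then stretch
    the linear values to a target peak luminance in cd/m^2.
    max_lum and gamma are illustrative assumptions, not thesis values."""
    linear = np.clip(ldr, 0.0, 1.0) ** gamma   # linearise the LDR code values
    return linear * max_lum

ldr = np.array([0.0, 0.5, 1.0])
hdr = expand_ldr(ldr)
```

Such a uniform expansion amplifies quantisation and clipping errors in bright regions, which is precisely why artefact-aware expansion frameworks like the one proposed here are needed.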