Stereoscopic high dynamic range imaging
Two modern technologies show promise to dramatically increase immersion in
virtual environments. Stereoscopic imaging captures two images representing
the views of both eyes and allows for better depth perception. High dynamic
range (HDR) imaging accurately represents real world lighting as opposed to
traditional low dynamic range (LDR) imaging. HDR provides a better contrast
and more natural looking scenes. The combination of the two technologies in
order to gain advantages of both has been, until now, mostly unexplored due to
the current limitations in the imaging pipeline. This thesis reviews both fields,
proposes a stereoscopic high dynamic range (SHDR) imaging pipeline, outlines the
challenges that need to be resolved to enable SHDR, and focuses on the capture and
compression aspects of that pipeline.
The problems of capturing SHDR images, which would potentially require two
HDR cameras and could introduce ghosting, are mitigated by capturing an HDR and
LDR pair and using it to generate SHDR images. A detailed user study compared
four different methods of generating SHDR images. Results demonstrated that
one of the methods may produce images perceptually indistinguishable from the
ground truth.
Insights obtained while developing static image operators guided the design
of SHDR video techniques. Three methods for generating SHDR video from an
HDR-LDR video pair are proposed and compared to the ground truth SHDR
videos. Results showed little overall error and identified a method with the least
error.
Once captured, SHDR content needs to be efficiently compressed. Five SHDR
compression methods that are backward compatible are presented. The proposed
methods can encode SHDR content at a size only slightly larger than that of a
traditional single LDR image (18% larger for one method), and the backward-compatibility
property encourages early adoption of the format.
The work presented in this thesis has introduced and advanced capture and
compression methods for the adoption of SHDR imaging. In general, this research
paves the way for the novel field of SHDR imaging, which should lead to an improved
and more realistic representation of captured scenes.
Algorithms for the enhancement of dynamic range and colour constancy of digital images & video
One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to reproduce its capabilities accurately. The extraordinary capabilities of the human eye have exposed a crucial shortcoming in digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis capabilities.
Over the decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity with which the human visual system achieves effective colour constancy and dynamic range capabilities.
The aim of the research presented in this thesis is to enhance the overall image quality within an image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer-electronics imaging devices.
The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image-processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities; the ultimate goal of any state-of-the-art technique, or commercial imaging device.
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras
Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits
us to embed this technology in practical vision systems with little adaptation of the existing
solutions. In this communication, we define an imaging pipeline, extended from color filter arrays,
that permits high dynamic range (HDR) spectral imaging. We propose an implementation
of this pipeline on a prototype sensor and evaluate the quality of our implementation results on
real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in
particular, solve the problem of noise generated by the lack of
Robust estimation of exposure ratios in multi-exposure image stacks
Merging multi-exposure image stacks into a high dynamic range (HDR) image
requires knowledge of accurate exposure times. When exposure times are
inaccurate, for example, when they are extracted from a camera's EXIF metadata,
the reconstructed HDR images reveal banding artifacts at smooth gradients. To
remedy this, we propose to estimate exposure ratios directly from the input
images. We derive the exposure time estimation as an optimization problem, in
which pixels are selected from pairs of exposures to minimize estimation error
caused by camera noise. When pixel values are represented in the logarithmic
domain, the problem can be solved efficiently using a linear solver. We
demonstrate that the estimation can be easily made robust to pixel misalignment
caused by camera or object motion by collecting pixels from multiple spatial
tiles. The proposed automatic exposure estimation and alignment eliminates
banding artifacts in popular datasets and is essential for applications that
require physically accurate reconstructions, such as measuring the modulation
transfer function of a display. The code for the method is available.
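As a minimal illustration of the log-domain estimation, the sketch below (not the paper's exact implementation) assumes two linear, noise-free exposures with pixel values normalized to [0, 1]; for a single pair of exposures, the least-squares solution in the log domain reduces to the mean of the per-pixel log differences. The thresholds `lo` and `hi` and the function name are placeholders:

```python
import numpy as np

def estimate_log_exposure_ratio(img_short, img_long, lo=0.05, hi=0.95):
    """Estimate log(t_long / t_short) from one pair of linear exposures.

    Pixels are selected so both values are well exposed (away from the
    noise floor and from saturation); in the log domain the least-squares
    estimate of the ratio is then the mean log difference. The paper's
    full method additionally weights pixels by noise and handles multiple
    exposures and spatial tiles.
    """
    a = img_short.astype(np.float64).ravel()
    b = img_long.astype(np.float64).ravel()
    mask = (a > lo) & (a < hi) & (b > lo) & (b < hi)
    if not np.any(mask):
        raise ValueError("no well-exposed pixel pairs")
    return np.mean(np.log(b[mask]) - np.log(a[mask]))

# Synthetic check: the longer exposure is exactly 4x the shorter one.
rng = np.random.default_rng(0)
radiance = rng.uniform(0.02, 0.2, size=10000)
short = np.clip(radiance * 1.0, 0, 1)
long_ = np.clip(radiance * 4.0, 0, 1)
ratio = np.exp(estimate_log_exposure_ratio(short, long_))
```

On clean synthetic data the recovered ratio matches the true factor of 4; with real pixel noise the selection mask is what keeps the estimate stable.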
Model-Based Environmental Visual Perception for Humanoid Robots
The visual perception of a robot should answer two fundamental questions: What? and Where? In order to reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.
Objective and subjective assessment of perceptual factors in HDR content processing
The development of display and camera technology has made high dynamic range (HDR) imaging more and more popular. HDR images are pleasant to look at because they retain more detail, which gives them good quality. This paper presents some important techniques in HDR imaging, together with the work the author did. The paper is formed of three parts. The first part is an introduction to HDR images; from this part we can learn why HDR images have good quality.
Real-Time Algorithms for High Dynamic Range Video
A recurring problem in capturing video is the scene having a range of brightness values that exceeds the capabilities of the capturing device.
An example would be a video camera in a bright outside area, directed at the entrance of a building.
Because of the potentially big brightness difference, it may not be possible to capture details of the inside of the building and the outside simultaneously using just one shutter speed setting.
This results in under- and overexposed pixels in the video footage.
The approach we follow in this thesis to overcome this problem is temporal exposure bracketing, i.e., using a set of images captured in quick sequence at different shutter settings.
Each image then captures one facet of the scene's brightness range.
When fused together, a high dynamic range (HDR) video frame is created that reveals details in dark and bright regions simultaneously.
The process of creating a frame in an HDR video can be thought of as a pipeline where the output of each step is the input to the subsequent one.
It begins by capturing a set of regular images using varying shutter speeds.
Next, the images are aligned with respect to each other to compensate for camera and scene motion during capture.
The aligned images are then merged together to create a single HDR frame containing accurate brightness values of the entire scene.
As a last step, the HDR frame is tone mapped in order to be displayable on a regular screen with a lower dynamic range.
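The merging step of this pipeline is commonly implemented as a weighted average of per-exposure radiance estimates. The following is a minimal sketch of one standard weighting choice (a "hat" function favoring mid-range pixels), assuming linear, normalized pixel values; it is not the specific scheme used in this thesis:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge aligned exposures into a single HDR radiance map.

    Each pixel's radiance estimate value / exposure_time is averaged
    with a hat weight that peaks at mid-gray (0.5) and discounts
    under- and over-exposed pixels.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        v = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * v - 1.0)  # hat weight, maximal at v = 0.5
        num += w * v / t                 # weighted radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)   # guard fully clipped pixels
```

For a pixel seen as 0.3 at exposure 1.0 and 0.6 at exposure 2.0, both estimates agree on a radiance of 0.3, and the merge returns exactly that value.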
This thesis covers algorithms for these steps that allow the creation of HDR video in real-time.
When creating videos instead of still images, the focus lies on high capturing and processing speed and on assuring temporal consistency between the video frames.
In order to achieve this goal, we take advantage of the knowledge gained from the processing of previous frames in the video.
This work addresses the following aspects in particular.
The image size parameters for the set of base images are chosen such that only as little image data as possible is captured.
We make use of the fact that it is not always necessary to capture full size images when only small portions of the scene require HDR.
Avoiding redundancy in the image material is an obvious approach to reducing the overall time taken to generate a frame.
With the aid of the previous frames, we calculate brightness statistics of the scene.
The exposure values are chosen in a way, such that frequently occurring brightness values are well-exposed in at least one of the images in the sequence.
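A greedy version of this exposure selection could look like the sketch below. It is an illustrative scheme, not the thesis's exact algorithm, and it assumes a simple linear camera model (pixel value = luminance times exposure time, clipped to [0, 1]): the most populated bins of the previous frame's log-luminance histogram each receive an exposure that renders them mid-range.

```python
import numpy as np

def choose_exposures(log_lum, n_exposures=2, target=0.5, bins=32):
    """Pick shutter times from a log-luminance histogram of the previous
    frame so that frequently occurring brightness values land mid-range
    in at least one exposure. Greedy placeholder scheme: take the n
    most populated bins and map each bin center to the exposure that
    renders it at `target`.
    """
    hist, edges = np.histogram(log_lum, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    top = np.argsort(hist)[::-1][:n_exposures]  # most populated bins
    return sorted(target / np.exp(centers[top]))
```

For a bimodal scene (for example, a dark interior and a bright exterior), this yields one long and one short exposure, one per dominant brightness mode.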
The base images from which the HDR frame is created are captured in quick succession.
The effects of intermediate camera motion are thus less intense than in the still image case, and a comparably simpler camera motion model can be used.
At the same time, however, there is much less time available to estimate motion.
For this reason, we use a fast heuristic that makes use of the motion information obtained in previous frames.
It is robust to the large brightness difference between the images of an exposure sequence.
The range of luminance values of an HDR frame must be tone mapped to the displayable range of the output device.
Most available tone mapping operators are designed for still images and scale the dynamic range of each frame independently.
In situations where the scene's brightness statistics change quickly, these operators produce visible image flicker.
We have developed an algorithm that detects such situations in an HDR video.
Based on this detection, a temporal stability criterion for the tone mapping parameters then prevents image flicker.
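One simple way to realize such a temporal stability criterion (not necessarily the detector developed in this work) is to smooth the tone-mapping parameter across frames unless the change is large enough to indicate a genuine scene change; the threshold and smoothing factor below are illustrative:

```python
def smooth_tm_param(prev, current, alpha=0.9, jump_thresh=0.5):
    """Temporally stabilize a tone-mapping parameter across frames,
    e.g. the log-average luminance 'key' of a global operator.

    Small frame-to-frame fluctuations (flicker) are smoothed away by
    exponential averaging; a large relative jump is treated as a real
    scene change and followed immediately.
    """
    rel_change = abs(current - prev) / max(abs(prev), 1e-8)
    if rel_change > jump_thresh:
        return current                        # scene change: adapt fast
    return alpha * prev + (1 - alpha) * current  # flicker: smooth heavily
```

With `alpha = 0.9`, a 5% fluctuation moves the parameter by only 0.5%, while a doubling of scene brightness is passed through unmodified.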
All methods for capture, creation and display of HDR video introduced in this work have been fully implemented, tested and integrated into a running HDR video system.
The algorithms were analyzed for parallelizability and, if applicable, adjusted and implemented on a high-performance graphics chip.
A simplified HDR image processing pipeline for digital photography
High Dynamic Range (HDR) imaging has revolutionized digital imaging. It allows the
capture, storage, manipulation, and display of the full dynamic range of the captured scene.
As a result, it has opened whole new possibilities for digital photography, from the
photorealistic to the hyper-real. With all these advantages, the technique is expected to replace
conventional 8-bit Low Dynamic Range (LDR) imaging in the future. However,
HDR results in an even more complex imaging pipeline, including new techniques for
capturing, encoding, and displaying images. The goal of this thesis is to bridge the
gap between the conventional imaging pipeline and the HDR pipeline in as simple a way as possible.
We make three contributions. First, we show that a simple extension of gamma
encoding suffices as a representation to store HDR images. Second, we show that gamma,
as a control for image contrast, can be ‘optimally’ tuned on a per-image basis. Lastly, we show
that a general tone curve, with detail preservation, suffices to tone map an image (there is
only a limited need for expensive spatially varying tone mappers). All three of our
contributions are evaluated psychophysically. Together they support our general thesis
that an HDR workflow similar to that already used in photography might be adopted. This
said, we believe the adoption of HDR into photography is, perhaps, less difficult than it
is sometimes posed to be.
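The first contribution's idea can be sketched as follows. This is an assumption-laden illustration, not the thesis's encoder: the per-image optimal gamma is replaced by a fixed 2.2, and 12-bit quantization is a placeholder choice.

```python
import numpy as np

def gamma_encode(hdr, gamma=2.2, bits=12):
    """Store an HDR image via extended gamma encoding: normalize by the
    image peak, apply a power-law (gamma) curve, and quantize. The
    per-image 'optimal' gamma of the thesis is not reproduced here."""
    peak = float(hdr.max())
    code = np.round((hdr / peak) ** (1.0 / gamma) * (2**bits - 1))
    return code.astype(np.uint16), peak

def gamma_decode(code, peak, gamma=2.2, bits=12):
    """Invert the encoding back to linear radiance."""
    return ((code.astype(np.float64) / (2**bits - 1)) ** gamma) * peak
```

A round trip over five orders of magnitude of radiance stays within a few percent relative error, which is the sense in which a gamma curve plus modest bit depth "suffices" as an HDR representation.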