1,113 research outputs found
An adaptive true motion estimation algorithm for frame rate up-conversion and its hardware design
With advances in video and display technologies, flat-panel High Definition Television (HDTV) displays with 100 Hz, 120 Hz and, most recently, 240 Hz picture rates have been introduced. However, video material is captured and broadcast at temporal resolutions ranging from 24 Hz to 60 Hz. To display these video formats correctly on high picture rate displays, new frames must be generated and inserted into the original video sequence to increase its frame rate; Frame Rate Up-Conversion (FRUC) has therefore become a necessity. Motion compensated FRUC algorithms provide better quality results than non-motion compensated FRUC algorithms. Motion Estimation (ME) is the process of finding motion vectors that describe the motion of objects between adjacent frames, and it is the most computationally intensive part of motion compensated FRUC algorithms. For FRUC applications, it is important to find the motion vectors that represent the real motion of objects, which is called true ME. In this thesis, an Adaptive True Motion Estimation (ATME) algorithm is proposed. Compared to the 3-D Recursive Search true ME algorithm, ATME produces similar quality results with fewer calculations, or better quality results with a similar number of calculations, by adaptively using optimized sets of candidate search locations and several redundancy removal techniques. In addition, three hardware architectures of different complexity are proposed for ATME. The proposed hardware uses efficient data reuse schemes for the non-regular data flow of the ATME algorithm. Two of these hardware architectures are implemented on a Xilinx Virtex-4 FPGA and are capable of processing ~158 and ~168 720p HD frames per second, respectively
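The candidate-based block matching at the heart of such true-ME algorithms can be sketched as follows. This is a minimal illustration, not the ATME algorithm itself: it uses a fixed cross of candidate vectors and a sum-of-absolute-differences (SAD) cost, where ATME would use adaptively optimized candidate sets; all names are illustrative.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def candidate_me(prev, curr, block=8,
                 candidates=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """For each block of `curr`, pick the candidate motion vector whose
    matched block in `prev` has the lowest SAD. The fixed candidate cross
    stands in for spatio-temporally predicted, adaptive candidate sets."""
    h, w = curr.shape
    mv = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur_blk = curr[y:y + block, x:x + block]
            best, best_cost = (0, 0), None
            for dy, dx in candidates:
                py, px = y + dy, x + dx
                if 0 <= py and py + block <= h and 0 <= px and px + block <= w:
                    cost = sad(cur_blk, prev[py:py + block, px:px + block])
                    if best_cost is None or cost < best_cost:
                        best, best_cost = (dy, dx), cost
            mv[by, bx] = best
    return mv
```

A FRUC stage would then interpolate the inserted frame by placing each block halfway along its motion vector, which is why vectors describing the true motion (rather than merely minimizing the cost) matter for visual quality.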
Transformation-aware Perceptual Image Metric
Predicting human visual perception has several applications, such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations: for example, we perceive an image and a rotated copy of it as identical, whereas such approaches report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images becomes increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field, and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase in difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of the complexity of a flow field. This representation is then used for applications such as the comparison of non-aligned images, where transformations cause threshold elevation, the detection of salient transformations, and a model of perceived motion parallax. Further applications of our approach are a perceptual level of detail for real-time rendering and viewpoint selection based on perceived motion parallax
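The first step, fitting a local transformation to a flow patch and decomposing it into elementary components, can be sketched as follows. This sketch fits an affine map rather than a full homography (an affine map omits the perspective terms the paper also recovers) and extracts rotation and scale via a polar-style decomposition; it is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def fit_affine(points, flow):
    """Least-squares affine map [A | t] sending each point p to p + flow(p).
    A local affine fit stands in for the paper's homography fit."""
    src = np.asarray(points, float)              # (n, 2) pixel positions
    dst = src + np.asarray(flow, float)          # (n, 2) displaced positions
    X = np.hstack([src, np.ones((len(src), 1))]) # (n, 3) homogeneous inputs
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2): rows are A^T and t
    return M[:2].T, M[2]                         # A (2x2), t (2,)

def decompose(A):
    """Split A into a rotation angle and a mean scale factor using the
    polar decomposition A = R S obtained from the SVD."""
    U, S, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:                     # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    angle = np.arctan2(R[1, 0], R[0, 0])
    return angle, S.mean()
```

For a purely rotational flow field this recovers the rotation angle exactly; measuring how such elementary components vary across the field is what the proposed transformation entropy then quantifies.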
Colour videos with depth: acquisition, processing and evaluation
The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalents in computer vision and computer graphics are geometric models, which provide a wealth of information about represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I hence investigate a combination of videos and geometric models: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display.
I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid, a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension which incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps, particularly in the presence of image noise.
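The cross-bilateral weighting that underlies such stereo matching can be sketched as follows. This is an illustrative direct windowed aggregation, not the dissertation's grid-accelerated formulation: the matching cost for a disparity hypothesis is averaged over a window, with each neighbour weighted by its colour similarity to the centre pixel in both the left and the right image (the "dual" idea); all parameter values are illustrative.

```python
import numpy as np

def dcb_cost(left, right, x, y, d, radius=2, sigma_c=10.0):
    """Colour-weighted matching cost at pixel (x, y) for disparity d.
    Neighbours that resemble the centre pixel's colour in BOTH images
    dominate the average, so costs respect object boundaries."""
    if x - d < 0:
        return np.inf
    h, w = left.shape
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xl, xr = y + dy, x + dx, x + dx - d
            if 0 <= yy < h and 0 <= xl < w and 0 <= xr < w:
                wl = np.exp(-(left[yy, xl] - left[y, x]) ** 2
                            / (2 * sigma_c ** 2))
                wr = np.exp(-(right[yy, xr] - right[y, x - d]) ** 2
                            / (2 * sigma_c ** 2))
                num += wl * wr * abs(float(left[yy, xl]) - float(right[yy, xr]))
                den += wl * wr
    return num / den if den else np.inf

def best_disparity(left, right, x, y, max_d=4):
    """Winner-takes-all disparity selection over the aggregated costs."""
    costs = [dcb_cost(left, right, x, y, d) for d in range(max_d + 1)]
    return int(np.argmin(costs))
```

The grid acceleration replaces these per-pixel window sums with splatting and slicing in a downsampled space-colour grid, which is what makes the approach real-time.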
The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise.
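The upsampling step of such a pipeline is commonly built on joint (cross) bilateral upsampling, which the following sketch illustrates; it is a plain spatial JBU, standing in for the dissertation's spatiotemporal filter, and all parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, colour_hi, scale,
                             sigma_s=1.0, sigma_c=8.0, radius=1):
    """Upsample a low-resolution depth map using a high-resolution colour
    image as a guide: low-res depth samples whose colour resembles the
    target pixel's colour get higher weight, so depth edges snap to
    colour edges instead of being blurred."""
    H, W = colour_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale        # position on low-res grid
            num = den = 0.0
            # visit low-res samples around the (possibly fractional) centre
            for j in range(int(cy) - radius, int(cy) + radius + 2):
                for i in range(int(cx) - radius, int(cx) + radius + 2):
                    if 0 <= j < depth_lo.shape[0] and 0 <= i < depth_lo.shape[1]:
                        gy = min(j * scale, H - 1)   # guide pixel for sample
                        gx = min(i * scale, W - 1)
                        ws = np.exp(-((j - cy) ** 2 + (i - cx) ** 2)
                                    / (2 * sigma_s ** 2))
                        wc = np.exp(-(colour_hi[y, x] - colour_hi[gy, gx]) ** 2
                                    / (2 * sigma_c ** 2))
                        num += ws * wc * depth_lo[j, i]
                        den += ws * wc
            out[y, x] = num / den
    return out
```

Extending the weights with a temporal term over neighbouring frames is what turns a filter like this into a spatiotemporal one that also suppresses frame-to-frame depth noise.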
I show that these videos with depth enable a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information; for example, a proposed video relighting technique requires high-quality surface normals to produce plausible results. In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically.
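The dependence of relighting on geometry can be made concrete with a minimal sketch: derive per-pixel normals from the depth map by finite differences and shade them with a Lambertian model. This assumes an orthographic camera and ignores the quality refinements a real system needs; the function names are illustrative.

```python
import numpy as np

def normals_from_depth(depth):
    """Per-pixel surface normals from a depth map via central differences:
    n = normalize((-dz/dx, -dz/dy, 1)), assuming an orthographic camera."""
    dzdy, dzdx = np.gradient(depth.astype(float))   # row, column derivatives
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def relight(albedo, normals, light_dir):
    """Lambertian shading: I = albedo * max(0, n . l) for a directional light."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    return albedo * np.clip(normals @ l, 0.0, None)
```

Because the normals are derivatives of the depth, any depth noise is amplified in them, which is why the preceding denoising and upsampling stages are critical for plausible relighting.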
These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result of this is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model using a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies
Stereoscopic high dynamic range imaging
Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of the two eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, providing better contrast and more natural-looking scenes. The combination of the two technologies to gain the advantages of both has been, until now, mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline.
The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR and LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth.
Insights obtained while developing the static image operators guided the design of the SHDR video techniques. Three methods for generating SHDR video from an HDR and LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error.
Once captured, SHDR content needs to be compressed efficiently. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at little more than the size of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format.
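Backward-compatible HDR coding is typically layered: a legacy decoder reads only a tone-mapped 8-bit base layer, while an aware decoder also reads a small residual layer and reconstructs the HDR signal. The following sketch shows a generic two-layer scheme of this kind; the thesis's five methods differ in detail, and the tone curve and epsilon used here are illustrative.

```python
import numpy as np

def encode_layers(hdr, tonemap=lambda x: x / (1.0 + x)):
    """Split HDR pixels into a backward-compatible 8-bit base layer
    (displayable as-is by legacy decoders) and a log-ratio residual
    layer from which the HDR values can be restored."""
    base = np.clip(np.round(tonemap(hdr) * 255), 0, 255).astype(np.uint8)
    ratio = np.log((hdr + 1e-6) / (base / 255.0 + 1e-6))   # residual layer
    return base, ratio

def decode_layers(base, ratio):
    """HDR reconstruction: apply the stored ratio to the base layer."""
    return (base / 255.0 + 1e-6) * np.exp(ratio) - 1e-6
```

In a real codec both layers would additionally be quantised and entropy-coded; the "little more than an LDR image" figure comes from the residual layer compressing far better than a second full image.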
The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representations of captured scenes
Sparse variational regularization for visual motion estimation
The computation of visual motion is a key component in numerous computer vision tasks such as object detection, visual object tracking and activity recognition. Despite extensive research effort, efficient handling of motion discontinuities, occlusions and illumination changes still remains elusive in visual motion estimation. The work presented in this thesis utilizes variational methods to handle the aforementioned problems because these methods allow the integration of various mathematical concepts into a single energy minimization framework. This thesis applies concepts from signal sparsity to the variational regularization for visual motion estimation. The regularization is designed in such a way that it handles motion discontinuities and can detect object occlusions
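The core idea of sparsity-inducing variational regularization, an energy with a quadratic data term plus an L1-type penalty on gradients that favours piecewise-constant solutions and so preserves discontinuities, can be illustrated in one dimension. This is a scalar denoising analogue of the flow-field regularizer, not the thesis's method; the Charbonnier term sqrt(x^2 + eps^2) is a smooth stand-in for |x|, and all parameter values are illustrative.

```python
import numpy as np

def denoise_tv(f, lam=0.5, eps=0.1, iters=3000, step=0.01):
    """Minimise E(u) = 0.5 * sum (u - f)^2 + lam * sum sqrt((Du)^2 + eps^2)
    by gradient descent, where Du are forward differences of u.
    Unlike a quadratic penalty, the L1-type term keeps edges sharp
    because a single large jump costs no more than many small ones."""
    u = f.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)                             # forward differences
        phi = du / np.sqrt(du ** 2 + eps ** 2)      # d/dx sqrt(x^2 + eps^2)
        # discrete divergence: adjoint of the forward-difference operator
        div = np.concatenate([[phi[0]], np.diff(phi), [-phi[-1]]])
        u -= step * ((u - f) - lam * div)
    return u
```

In the motion-estimation setting the unknown is a two-channel flow field and the data term couples the two frames through the brightness-constancy assumption, but the role of the sparse regularizer, preserving motion discontinuities at object boundaries, is the same.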