Algorithms for compression of high dynamic range images and video
The recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1.
Despite these advances in technology, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Further, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment.
Current solutions to this problem include tone mapping the HDR content to fit SDR. However, this approach degrades image quality when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with current SDR infrastructure and are thus typically used in closed systems.
Given the above observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform.
In the course of the research presented in this thesis, a novel two-layer codec architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. Novel approaches to the compression of metadata for the tone mapping operator are also shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
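The backward-compatible two-layer idea can be sketched roughly as follows. This is a minimal illustration only: the log tone mapping, the low-order polynomial approximation of the inverse operator, and the residual layer split are simplified stand-ins for the thesis's actual algorithms, and all function names are hypothetical.

```python
import numpy as np

def tone_map(hdr, eps=1e-6):
    """Simple global log tone mapping: HDR luminance -> [0, 1] SDR."""
    log_l = np.log(hdr + eps)
    return (log_l - log_l.min()) / (log_l.max() - log_l.min() + eps)

def two_layer_encode(hdr):
    """Base layer: 8-bit tone-mapped SDR for legacy decoders.
    Enhancement layer: HDR residual against an inverse-tone-mapped
    prediction reconstructed from the base layer."""
    sdr = np.round(tone_map(hdr) * 255).astype(np.uint8)   # base layer
    # Approximate the inverse TMO with a low-order polynomial fit;
    # the coefficients would travel as side metadata.
    coeffs = np.polyfit(sdr.ravel().astype(float), hdr.ravel(), deg=3)
    prediction = np.polyval(coeffs, sdr.astype(float))
    residual = hdr - prediction                             # enhancement layer
    return sdr, coeffs, residual

def two_layer_decode(sdr, coeffs, residual):
    """HDR-capable decoders add the residual; SDR decoders use sdr alone."""
    return np.polyval(coeffs, sdr.astype(float)) + residual

hdr = np.random.default_rng(0).uniform(0.01, 1000.0, size=(16, 16))
sdr, coeffs, residual = two_layer_encode(hdr)
rec = two_layer_decode(sdr, coeffs, residual)
assert np.allclose(rec, hdr)   # exact when the residual layer is kept intact
```

An SDR-only device simply displays `sdr`, which is what makes the scheme backward compatible; in a real codec both layers would additionally be transform-coded and quantised.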
Scalable and network aware video coding for advanced communications over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service.
In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources. The research ensured the consideration of various levels of user-acceptable QoS. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques.
To further improve the adaptability of the designed techniques, several experiments and real time simulations are conducted with the aim of determining the optimum performance with various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance using various types of video content and formats. Several algorithms are developed to provide a dynamic adaptation of coding tools and parameters to specific video content type, format and bandwidth of transmission.
Due to the nature of heterogeneous networks, where channel conditions, terminals, and users' capabilities and preferences change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured due to reported deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow the use of the base layer, then the stream uses parameter identifiers with high efficiency to improve the scalability and adaptation of the video service.
To further improve the flexibility and efficiency of the algorithm, a dynamic de-blocking filter and lambda value selection are analyzed and introduced into the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow the transmission of the entire bit-stream.
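The fallback cascade described in the abstract can be sketched as a simple decision function. The thresholds, field names and mode labels below are hypothetical illustrations, not values from the thesis; the real algorithm bases its decision on monitored channel reports from the network simulator.

```python
def select_scalability_mode(channel):
    """Illustrative SADMA-style cascade: progressively fall back as the
    reported channel conditions worsen. All thresholds are hypothetical."""
    bw, loss = channel["bandwidth_kbps"], channel["loss_rate"]
    if loss > 0.10 or bw < 64:
        # Worst case: high-efficiency parameter identifiers only
        return "high-efficiency-parameter-ids"
    if loss > 0.05 or bw < 256:
        return "base-layer-only"
    if loss > 0.01 or bw < 1024:
        return "reduced-layered-stream"
    return "full-scalable-stream"

good = {"bandwidth_kbps": 2000, "loss_rate": 0.001}
bad = {"bandwidth_kbps": 50, "loss_rate": 0.2}
assert select_scalability_mode(good) == "full-scalable-stream"
assert select_scalability_mode(bad) == "high-efficiency-parameter-ids"
```

In practice the decision would be re-evaluated continuously as new channel reports arrive, which is what makes the adaptation dynamic.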
Video Traffic Characteristics of Modern Encoding Standards: H.264/AVC with SVC and MVC Extensions and H.265/HEVC
Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC. View the article as published at https://www.hindawi.com/journals/tswj/2014/189481
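Traffic variability studies of this kind typically summarise a frame-size trace with a few statistics. The sketch below computes common ones (mean bit rate, peak-to-mean ratio, coefficient of variation); it is a generic illustration of such measures, not the paper's specific methodology, and the example trace is synthetic.

```python
import numpy as np

def traffic_stats(frame_sizes_bytes, fps=30):
    """Summary statistics commonly used to characterise encoded video
    traffic: mean bit rate, peak-to-mean ratio, and the coefficient of
    variation (CoV) of frame sizes as a variability measure."""
    sizes = np.asarray(frame_sizes_bytes, dtype=float)
    mean = sizes.mean()
    return {
        "mean_bitrate_bps": mean * 8 * fps,
        "peak_to_mean": sizes.max() / mean,
        "cov": sizes.std() / mean,
    }

# Synthetic frame-size trace (bytes) with large intra frames every 12 frames,
# mimicking the bursty pattern of a GOP-structured encoding
rng = np.random.default_rng(1)
trace = rng.normal(2000, 200, 120)
trace[::12] += 8000
stats = traffic_stats(trace)
assert stats["peak_to_mean"] > 1 and stats["cov"] > 0
```

High peak-to-mean ratios are what make statistical multiplexing attractive: many variable-rate streams share a link whose capacity is well below the sum of their peaks.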
Surveillance centric coding
The research work presented in this thesis focuses on the development of techniques
specific to surveillance videos for efficient video compression with higher processing
speed. The Scalable Video Coding (SVC) techniques are explored to achieve higher
compression efficiency. The framework of SVC is modified to support Surveillance
Centric Coding (SCC). Motion estimation techniques specific to surveillance videos
are proposed in order to speed up the compression process of the SCC.
The main contributions of the research work presented in this thesis are divided into
two groups: (i) Efficient Compression and (ii) Efficient Motion Estimation. The
paradigm of Surveillance Centric Coding (SCC) is introduced, in which coding aims
to achieve bit-rate optimisation and adaptation of surveillance videos for storing and
transmission purposes. In the proposed approach the SCC encoder communicates
with the Video Content Analysis (VCA) module that detects events of interest in
video captured by the CCTV. Bit-rate optimisation and adaptation are achieved by
exploiting the scalability properties of the employed codec. Time segments
containing events relevant to the surveillance application are encoded at high
spatio-temporal resolution and quality, while portions irrelevant from the
surveillance standpoint are encoded at low spatio-temporal resolution and/or quality. Thanks to
the scalability of the resulting compressed bit-stream, additional bit-rate adaptation is
possible; for instance for the transmission purposes. Experimental evaluation showed
that significant reduction in bit-rate can be achieved by the proposed approach
without loss of information relevant to surveillance applications.
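The bit-allocation policy described above can be sketched as a per-segment parameter choice driven by the VCA module's event flags. The parameter values and resolution labels below are hypothetical illustrations, not the thesis's actual settings.

```python
def encoding_params(segment_has_event):
    """Illustrative SCC policy: spend bits on time segments that the
    Video Content Analysis (VCA) module flags as containing events of
    interest; encode the rest coarsely. Values are hypothetical."""
    if segment_has_event:
        return {"resolution": "4CIF", "fps": 25, "qp": 26}  # high fidelity
    return {"resolution": "QCIF", "fps": 5, "qp": 40}       # background only

# Per-segment event flags as they might arrive from the VCA module
events = [False, False, True, True, False]
plan = [encoding_params(e) for e in events]
assert plan[2]["fps"] > plan[0]["fps"]   # event segments get more frames
assert plan[2]["qp"] < plan[0]["qp"]     # and finer quantisation
```

Because the underlying codec is scalable, the same encoded stream can later be thinned further for transmission without re-encoding.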
In addition to the more efficient compression strategy, novel approaches to performing
efficient motion estimation specific to surveillance videos are proposed and
implemented with experimental results. A real-time background subtractor is used to
detect the presence of any motion activity in the sequence. Different approaches for
selective motion estimation, GOP based, Frame based and Block based, are
implemented. In the former, motion estimation is performed for the whole group of
pictures (GOP) only when a moving object is detected for any frame of the GOP.
In the Frame based approach, each frame is tested for motion activity and
consequently for selective motion estimation. The selective motion estimation
approach is further explored at a lower level as Block based selective motion
estimation. Experimental evaluation showed that significant reduction in
computational complexity can be achieved by applying the proposed strategy. In
addition to selective motion estimation, tracker-based motion estimation and a fast
full search using multiple reference frames are proposed for surveillance
videos.
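The GOP based variant described above can be sketched as a gating step in front of the motion estimation stage. The frame-differencing background subtractor and its thresholds below are deliberately toy-sized stand-ins for the real-time subtractor used in the thesis.

```python
import numpy as np

def detect_motion(frame, background, threshold=25, min_pixels=50):
    """Toy background subtractor: flag a frame as 'active' when enough
    pixels differ from the background model (thresholds hypothetical)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > threshold).sum() >= min_pixels

def gop_selective_me(gop_frames, background):
    """GOP-based selective motion estimation: run the (costly) motion
    estimation for the whole GOP only if any frame shows activity;
    otherwise skip it, e.g. reusing zero motion vectors."""
    if any(detect_motion(f, background) for f in gop_frames):
        return "run-motion-estimation"
    return "skip-motion-estimation"

bg = np.zeros((32, 32), dtype=np.uint8)
static_gop = [bg.copy() for _ in range(8)]
moving_gop = [bg.copy() for _ in range(8)]
moving_gop[3][5:20, 5:20] = 200          # an object appears in one frame
assert gop_selective_me(static_gop, bg) == "skip-motion-estimation"
assert gop_selective_me(moving_gop, bg) == "run-motion-estimation"
```

The Frame based and Block based refinements apply the same test at finer granularity, trading a little extra detection work for larger motion-estimation savings.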
Extensive testing on different surveillance videos shows the benefits of
applying the proposed approaches to achieve the goals of the SCC.
Advanced heterogeneous video transcoding
Video transcoding is an essential tool to promote inter-operability
between different video communication systems. This thesis presents
two novel video transcoders, both operating on bitstreams of the current
H.264/AVC standard. The first transcoder converts H.264/AVC
bitstreams to a Wavelet Scalable Video Codec (W-SVC), while the second targets the emerging High Efficiency Video Coding (HEVC).
Scalable Video Coding (SVC) enables low complexity adaptation
of compressed video, providing an efficient solution for content delivery
through heterogeneous networks. The transcoder proposed here aims at
exploiting the advantages offered by SVC technology when dealing with
conventional coders and legacy video, efficiently reusing information
found in the H.264/AVC bitstream to achieve a high rate-distortion
performance at a low complexity cost. Its main features include new
mode mapping algorithms that exploit the W-SVC larger macroblock
sizes, and a new state-of-the-art motion vector composition algorithm
that is able to tackle different coding configurations in the H.264/AVC
bitstream, including IPP or IBBP with multiple reference frames.
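Motion vector composition merges the vectors of several small H.264/AVC partitions into one vector for a larger covering block. The area-weighted median below is one common composition heuristic, shown only as an illustration; the thesis's state-of-the-art algorithm handles many more cases (e.g. IBBP and multiple reference frames).

```python
import numpy as np

def compose_motion_vector(partitions):
    """Compose one motion vector for a large block from the vectors of
    the smaller partitions covering it. Each partition is a tuple
    (width, height, mvx, mvy); the result is a component-wise median
    weighted by partition area (an illustrative heuristic)."""
    areas = np.array([w * h for w, h, _, _ in partitions], dtype=float)
    mvs = np.array([(x, y) for _, _, x, y in partitions], dtype=float)
    # Replicate each candidate proportionally to its area, then take
    # the median so outlier vectors from small partitions are rejected
    weights = (areas / areas.min()).astype(int)
    expanded = np.repeat(mvs, weights, axis=0)
    return tuple(np.median(expanded, axis=0))

# A 16x16 region covered by one 16x8 and two 8x8 H.264/AVC partitions
parts = [(16, 8, 4, 0), (8, 8, 4, 2), (8, 8, 12, 0)]
mv = compose_motion_vector(parts)
assert mv == (4.0, 0.0)   # the dominant large-partition vector wins
```

Reusing a composed vector as the search start point is what lets the transcoder avoid a full motion search and keep complexity low.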
The emerging video coding standard, HEVC, is currently approaching the final stage of development prior to standardization. This thesis
proposes and evaluates several transcoding algorithms for the HEVC
codec. In particular, a transcoder based on a new method that is capable of complexity scalability, trading off rate-distortion performance
for complexity reduction, is proposed. Furthermore, other transcoding solutions are explored, based on a novel content-based modeling
approach, in which the transcoder adapts its parameters based on the
contents of the sequence being encoded.
Finally, the application of this research is not constrained to these
transcoders, as many of the techniques developed aim to advance
research in this field, and have the potential to be
incorporated into different video transcoding architectures.
GRACE: Loss-Resilient Real-Time Video through Neural Codecs
In real-time video communication, retransmitting lost packets over
high-latency networks is not viable due to strict latency requirements. To
counter packet losses without retransmission, two primary strategies are
employed -- encoder-based forward error correction (FEC) and decoder-based
error concealment. The former encodes data with redundancy before transmission,
yet determining the optimal redundancy level in advance proves challenging. The
latter reconstructs video from partially received frames, but dividing a frame
into independently coded partitions inherently compromises compression
efficiency, and the lost information cannot be effectively recovered by the
decoder without adapting the encoder.
We present a loss-resilient real-time video system called GRACE, which
preserves the user's quality of experience (QoE) across a wide range of packet
losses through a new neural video codec. Central to GRACE's enhanced loss
resilience is its joint training of the neural encoder and decoder under a
spectrum of simulated packet losses. In lossless scenarios, GRACE achieves
video quality on par with conventional codecs (e.g., H.265). As the loss rate
escalates, GRACE exhibits a more graceful, less pronounced decline in quality,
consistently outperforming other loss-resilient schemes. Through extensive
evaluation on various videos and real network traces, we demonstrate that GRACE
reduces undecodable frames by 95% and stall duration by 90% compared with FEC,
while markedly boosting video quality over error concealment methods. In a user
study with 240 crowdsourced participants and 960 subjective ratings, GRACE
registers a 38% higher mean opinion score (MOS) than other baselines.
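The core training idea, jointly optimising encoder and decoder under randomly simulated packet losses, can be illustrated with a toy linear codec. This is a drastically simplified stand-in: GRACE uses deep neural networks and real video, while the sketch below uses a linear encoder/decoder pair and synthetic vectors, with loss simulated by masking latent coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, lr = 8, 8, 0.05

# Toy linear "codec": encoder W, decoder V (stand-ins for GRACE's
# neural encoder and decoder)
W = rng.normal(0, 0.3, (k, d))
V = rng.normal(0, 0.3, (d, k))

def train_step(x, loss_rate):
    """One SGD step on 0.5*||decode(mask*encode(x)) - x||^2, where the
    mask simulates packets dropped by the network."""
    global W, V
    z = W @ x
    mask = (rng.random(k) >= loss_rate).astype(float)  # simulated loss
    x_hat = V @ (mask * z)
    err = x_hat - x
    gV = np.outer(err, mask * z)            # gradient w.r.t. decoder
    gW = np.outer(mask * (V.T @ err), x)    # gradient w.r.t. encoder
    V -= lr * gV
    W -= lr * gW
    return float(err @ err)

# Joint training under a spectrum of simulated loss rates
for _ in range(3000):
    train_step(rng.normal(size=d), loss_rate=rng.uniform(0.0, 0.5))

def mse(loss_rate, trials=200):
    """Average reconstruction error at a given simulated loss rate."""
    tot = 0.0
    for _ in range(trials):
        x = rng.normal(size=d)
        mask = (rng.random(k) >= loss_rate).astype(float)
        x_hat = V @ (mask * (W @ x))
        tot += float((x_hat - x) @ (x_hat - x))
    return tot / trials

# Quality should decline gradually with loss rather than collapse
assert mse(0.0) < mse(0.5)
```

Because both sides are trained with losses in the loop, the decoder learns to reconstruct from whatever coefficients arrive, which is the mechanism behind the graceful quality decline reported in the abstract.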