Error resilience and concealment techniques for high-efficiency video coding
This thesis investigates robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness reveals that HEVC offers weak protection against network losses, with a significant impact on decoded video quality. Based on this evidence, the first contribution of this work is a new method that reduces the temporal dependencies between motion vectors, improving the decoded video quality without compromising compression efficiency. The second contribution is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimisation, which reduces the number of predictions that depend on a single reference. At the streaming stage, a prioritisation algorithm based on spatial dependencies selects a reduced set of motion vectors to be transmitted as side information, reducing mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded in a rate-controlled enhancement layer. Both are transmitted to the decoder for use in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case.
A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the robust video coding methods investigated in this thesis yield consistent quality gains over existing methods, including those implemented in the HEVC reference software. Furthermore, the proposed methods also achieve a better trade-off between coding efficiency and error robustness.
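The stream-selection idea above can be illustrated with a minimal sketch. Everything here is hypothetical: the feature set, the network size and the weights are placeholders, not values from the thesis; the point is only that a small classifier maps signal/stream/network features to the index of the stream expected to decode best.

```python
import numpy as np

def select_stream(features, W1, b1, W2, b2):
    """One-hidden-layer MLP: returns the index of the predicted best stream."""
    h = np.maximum(0.0, W1 @ features + b1)   # ReLU hidden layer
    scores = W2 @ h + b2                      # one score per candidate stream
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
# 5 illustrative features (e.g. motion activity, bitrate, packet loss rate, ...)
# and 4 candidate streams with different protection levels (all made up here)
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
W2, b2 = rng.normal(size=(4, 8)), np.zeros(4)
features = np.array([0.3, 1.2, 0.05, 0.7, 0.1])
print(select_stream(features, W1, b1, W2, b2))  # an index in 0..3
```

In a real system the weights would of course be trained on decoded-quality outcomes per loss case rather than drawn at random.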
3D coding tools final report
Deliverable D4.3 of the ANR PERSEE project. This report was produced within the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.3 of the project. Its title: 3D coding tools final report.
High-Level Synthesis Based VLSI Architectures for Video Coding
High Efficiency Video Coding (HEVC) is the state-of-the-art video coding standard. Emerging applications such as free-viewpoint video, 360-degree video, augmented reality and 3D movies require standardised extensions of HEVC. These extensions include HEVC Scalable Video Coding (SHVC), HEVC Multiview Video Coding (MV-HEVC), MV-HEVC plus depth (3D-HEVC) and HEVC Screen Content Coding. 3D-HEVC targets applications such as view synthesis and free-viewpoint video; the depth maps it codes and transmits are used for virtual view synthesis by algorithms such as Depth Image Based Rendering (DIBR). As a first step, we profiled the 3D-HEVC standard and identified its computationally intensive parts as candidates for efficient hardware implementation. One of the most computationally intensive parts of 3D-HEVC, HEVC and H.264/AVC is the interpolation filtering used for Fractional Motion Estimation (FME). The hardware implementation of the interpolation filtering is carried out using High-Level Synthesis (HLS) tools; the Xilinx Vivado Design Suite is used for the HLS implementation of the HEVC and H.264/AVC interpolation filters. As the complexity of digital systems keeps growing, HLS offers substantial benefits: late architectural or functional changes without time-consuming rewriting of RTL code, early testing and evaluation of algorithms in the design cycle, and the development of accurate models against which the final hardware can be verified.
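To make the FME interpolation concrete, the following sketch applies HEVC's 8-tap luma filter for the half-sample position in one dimension (the tap values are from the HEVC specification; the helper function and sample data are illustrative). This inner loop, applied over whole blocks in two dimensions, is what the hardware accelerates.

```python
# HEVC 8-tap luma interpolation filter, half-sample position.
HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def interp_half_pel(samples, i):
    """Half-pel value between samples[i] and samples[i+1].
    Needs 3 samples of margin on each side of position i."""
    acc = sum(c * samples[i - 3 + k] for k, c in enumerate(HALF_PEL_TAPS))
    return (acc + 32) >> 6  # round and normalise (the taps sum to 64)

row = [100, 100, 100, 100, 200, 200, 200, 200]
print(interp_half_pel(row, 3))  # → 150, halfway across the 100/200 edge
```

A hardware mapping typically unrolls the eight multiply-accumulates into a fixed adder tree, which is exactly the kind of structure HLS tools infer from such a loop.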
Dense light field coding: a survey
Light Field (LF) imaging is a promising solution for providing more immersive and closer to reality multimedia experiences to end-users with unprecedented creative freedom and flexibility for applications in different areas, such as virtual and augmented reality. Due to the recent technological advances in optics, sensor manufacturing and available transmission bandwidth, as well as the investment of many tech giants in this area, it is expected that soon many LF transmission systems will be available to both consumers and professionals. Recognizing this, novel standardization initiatives have recently emerged in both the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG), triggering the discussion on the deployment of LF coding solutions to efficiently handle the massive amount of data involved in such systems.
Since then, the topic of LF content coding has become a booming research area, attracting the attention of many researchers worldwide. In this context, this paper provides a comprehensive survey of the most relevant LF coding solutions proposed in the literature, focusing on angularly dense LFs. Special attention is placed on a thorough description of the different LF coding methods and on the main concepts related to this relevant area. Moreover, comprehensive insights are presented into open research challenges and future research directions for LF coding.
Motion estimation algorithm and its hardware architecture for HEVC
Doctorate in Electrical Engineering. Video coding is used in applications such as video surveillance, video conferencing, video streaming, video broadcasting and video storage. In a typical video coding standard, many algorithms are combined to compress a video; of these, motion estimation is the most complex task. Hence, it is necessary to implement this task in real time using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all variable block sizes specified in the HEVC standard, with a motion vector search range of up to ±64 pixels.
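For context, the sketch below shows the baseline that fast motion estimation algorithms improve upon: exhaustive SAD-based block matching within a search range. This is not the thesis's algorithm (which prunes the candidate set); it only illustrates the cost function and search space that the hardware must evaluate.

```python
import numpy as np

def sad(block, cand):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block.astype(int) - cand.astype(int)).sum())

def full_search(cur, ref, bx, by, bs, sr):
    """Exhaustive block matching: best (dx, dy, cost) within a +/-sr range."""
    block = cur[by:by + bs, bx:bx + bs]
    best = (0, 0, sad(block, ref[by:by + bs, bx:bx + bs]))
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and x + bs <= ref.shape[1] and y + bs <= ref.shape[0]:
                cost = sad(block, ref[y:y + bs, x:x + bs])
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))    # content shifted down 2, right 3
print(full_search(cur, ref, 16, 16, 8, 8)[:2])   # → (-3, -2): the MV points back
```

A ±64-pixel range, as supported by the architecture above, makes this search quadratically larger, which is why fast algorithms and dedicated hardware are essential.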
Error control strategies in H.265|HEVC video transmission
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. With the rapid development of video coding technologies over the last decade, high-resolution video delivery suffers from packet loss on unreliable, time-varying transmission channels. Error resilience approaches at the channel coding level are inefficient to implement in real-time video transmission because the encoded video samples have variable code lengths. Therefore, error resilience within the video coding standard plays a vital role in reducing error propagation and improving perceived visual quality. The main work in this thesis is to develop an efficient error resilience mechanism for the H.265|HEVC video coding standard that reduces the effects of error propagation in error-prone conditions. Two error resilience algorithms are proposed. The first is the Adaptive Slice Encoding (ASE) algorithm, which extracts and protects the most active slices in the coded bitstream based on an adaptive search window. It can be applied to low-delay video transmission with or without a feedback channel, and is designed to be compatible with the reference software model (HM16) of the H.265|HEVC coding standard. The second is a joint encoder-decoder algorithm called Error Resilience based on Supplemental Enhancement Information (ERSEI). A feedback message from the decoder, based on the decoded picture hash status, notifies the encoder to adaptively start encoding a clean random-access picture; at the same time, the decoder starts the error concealment process while waiting to receive correct video data, and a recovery point message on the feedback channel updates the encoder with error reports.
In this thesis, extensive experimental work, evaluation, and comparison with state-of-the-art related algorithms have been conducted to evaluate the proposed algorithms. Furthermore, the best trade-off between coding efficiency and error resilience performance was considered at the design stage. The experimental evaluation covers both encoding conditions, i.e. error-free and error-prone. The results show significant improvements in luma PSNR (Y-PSNR) and in the subjective quality of the decoded bitstream when using the proposed algorithms in error-prone conditions with a variety of packet loss rates.
Moreover, experimental work is conducted to test the algorithms' complexity in terms of processing execution time at both the encoding and decoding stages. Additionally, the performance of both the H.264|AVC and H.265|HEVC coding standards is evaluated in error-free and error-prone environments.
For the ASE algorithm, compared with the improved region of interest (IROI) and region of interest (ROI) algorithms, the most notable finding is a significant improvement in visual quality at packet loss rates (PLRs) of 2-18%.
For the ERSEI algorithm, compared with the default HM16 using pixel-copy concealment and motion-compensated error concealment (MCEC), the evaluation results indicate a clear visual quality enhancement under PLRs of 1, 2, 6 and 8%. This work was supported by the Ministry of Higher Education and Scientific Research in Iraq.
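The decoded-picture-hash check underlying the ERSEI feedback loop can be sketched as follows. HEVC does define a decoded picture hash SEI message that may carry an MD5 of each decoded picture; the flat-bytes frame representation and the function names below are simplifications for illustration, not the thesis's implementation.

```python
import hashlib

def picture_md5(samples: bytes) -> bytes:
    """MD5 over the decoded sample values, as a hash SEI message could carry."""
    return hashlib.md5(samples).digest()

def check_picture(decoded: bytes, sei_md5: bytes) -> bool:
    """True if the decoder's reconstruction matches the signalled hash."""
    return picture_md5(decoded) == sei_md5

frame = bytes(range(256)) * 4                   # stand-in for decoded luma samples
sei = picture_md5(frame)                        # hash the encoder would signal
corrupted = bytes([frame[0] ^ 1]) + frame[1:]   # a single flipped bit
print(check_picture(frame, sei))                # → True
print(check_picture(corrupted, sei))            # → False
```

On a False result the decoder would conceal the picture and, via the feedback channel, prompt the encoder to insert a clean random-access picture.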
A two-stage approach for robust HEVC coding and streaming
The increased compression ratios achieved by the High Efficiency Video Coding (HEVC) standard lead to reduced robustness of coded streams, with increased susceptibility to network errors and consequent video quality degradation. This paper proposes a two-stage method to improve the error robustness of HEVC streaming by reducing temporal error propagation in case of frame loss. The prediction mismatch that occurs at the decoder after a frame loss is reduced in two stages: (i) at the encoding stage, the reference pictures are dynamically selected based on constraining conditions and Lagrangian optimisation, which spreads the use of reference pictures by reducing the number of prediction units (PUs) that depend on a single reference; (ii) at the streaming stage, a motion vector (MV) prioritisation algorithm based on spatial dependencies selects an optimal subset of MVs to be transmitted redundantly as side information, reducing mismatched MV predictions at the decoder. Simulation results show that the proposed method significantly reduces temporal error propagation. Compared to the HEVC reference, the proposed reference picture selection improves video quality at low packet loss rates (e.g., 1%) at the same bitrate, and achieves quality gains of up to 2.3 dB at a 10% packet loss ratio. The redundant MVs are shown to boost performance further, with quality gains of 3 dB over the HEVC reference at the cost of a 4% increase in total bitrate.
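The constrained reference selection of stage (i) can be sketched as a Lagrangian cost J = D + λR with a penalty on over-used references. All numbers below (distortions, rates, the usage cap and penalty weight) are illustrative, not values from the paper; the sketch only shows how a constraint spreads predictions across references instead of letting one reference dominate.

```python
def select_reference(candidates, lam, usage, cap, penalty):
    """Pick the reference minimising J = D + lam*R, penalising references
    already used by 'cap' or more PUs. candidates: (ref_idx, D, R_bits)."""
    best_ref, best_cost = None, float("inf")
    for ref, d, r in candidates:
        j = d + lam * r
        if usage.get(ref, 0) >= cap:   # constrain over-used references
            j += penalty
        if j < best_cost:
            best_ref, best_cost = ref, j
    usage[best_ref] = usage.get(best_ref, 0) + 1
    return best_ref

usage = {}
cands = [(0, 100.0, 40), (1, 110.0, 38)]   # ref 0 is slightly cheaper alone
for _ in range(5):
    print(select_reference(cands, 1.0, usage, cap=2, penalty=50.0), end=" ")
# prints "0 0 1 1 0": ref 0 wins until capped, then the load spreads to ref 1
```

Without the penalty every PU here would pick reference 0, which is exactly the single-reference dependency that makes a lost picture so damaging.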
Analysis and Comparison of Modern Video Compression Standards for Random-access Light-field Compression
Light-field (LF) 3D displays are anticipated to be the next-generation 3D displays by providing smooth motion parallax, wide field of view (FOV), and higher depth range than the current autostereoscopic displays. The projection-based multi-view LF 3D displays bring the desired new functionalities through a set of projection engines creating light sources for the continuous light field to be created. Such displays require a high number of perspective views as an input to fully exploit the visualization capabilities and viewing angle provided by the LF technology. Delivering, processing and de/compressing this amount of views pose big technical challenges. However, when processing light fields in a distributed system, access patterns in ray space are quite regular, some processing nodes do not need all views, moreover the necessary views are used only partially. This trait could be exploited by partial decoding of pictures to help providing less complex and thus real-time operation.
However, none of the recent video coding standards (e.g., the Advanced Video Coding (AVC)/H.264 and High Efficiency Video Coding (HEVC)/H.265 standards) provides partial decoding of video pictures. Such a feature can be achieved by partitioning video pictures into regions that can be processed independently, at the cost of lower compression efficiency. Examples of such partitioning features in the modern video coding standards include slices and tiles, which enable random access into the video bitstream with a specific granularity. In addition, extra requirements have to be imposed on the standard partitioning tools for them to be applicable to partial decoding. This leads to so-called self-contained partitions, i.e. isolated, independently decodable regions of the video pictures.
This work studies the problem of creating self-contained partitions in the conventional AVC/H.264 and HEVC/H.265 standards, and HEVC 3D extensions including multi-view (i.e., MV-HEVC) and 3D (i.e., 3D-HEVC) extensions using slices and tiles, respectively. The requirements that need to be fulfilled in order to build self-contained partitions are described, and an encoder-side solution is proposed. Further, the work examines how slicing/tiling can be used to facilitate random access into the video bitstreams, how the number of slices/tiles affects the compression ratio considering different prediction structures, and how much effect partial decoding has on decoding time.
Overall, the experimental results indicate that the finer the partitioning, the higher the compression loss. The use of self-contained partitions makes the decoding operation very efficient and less complex.
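The random-access use case above can be sketched with a simple tile-grid lookup: given a requested picture region, compute which tiles must be decoded. The uniform grid and function name are illustrative assumptions; the key point is that with self-contained tiles (prediction constrained to the tile's own area) this subset is decodable on its own.

```python
def tiles_for_region(x, y, w, h, pic_w, pic_h, cols, rows):
    """Return (row, col) indices of the tiles covering a w x h region at (x, y),
    assuming a uniform cols x rows tile grid over a pic_w x pic_h picture."""
    tw, th = pic_w // cols, pic_h // rows
    c0, c1 = x // tw, min((x + w - 1) // tw, cols - 1)
    r0, r1 = y // th, min((y + h - 1) // th, rows - 1)
    return [(r, c) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# 1920x1080 picture split into a 4x2 tile grid; a 300x300 region at (500, 400)
print(tiles_for_region(500, 400, 300, 300, 1920, 1080, cols=4, rows=2))
# → [(0, 1), (1, 1)] — only 2 of the 8 tiles need decoding
```

This is the saving partial decoding offers a distributed LF processing node: decode only the tiles whose rays it actually uses.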
High-Level Synthesis Implementation of HEVC Intra Encoder
High Efficiency Video Coding (HEVC) is the latest video coding standard that aims to alleviate the increasing transmission and storage needs of modern video applications. Compared with its predecessor, HEVC is able to halve the bit rate required for high-quality video, but at the cost of increased complexity. The high complexity makes HEVC encoding slow and resource-intensive, but also an ideal target for hardware acceleration.
With increasingly complex designs, the effort required for traditional hardware development at the register-transfer level (RTL) grows substantially. High-Level Synthesis (HLS) aims to solve this by raising the abstraction level through automatic tools that generate RTL code from general-purpose programming languages like C or C++.
In this thesis, we used the Catapult-C HLS tool to create an intra coding accelerator for an HEVC encoder on a Field Programmable Gate Array (FPGA). The C source code of the Kvazaar open-source HEVC encoder served as the reference model for the accelerator implementation. Over 90% of the implementation, including all major intra coding tools, was written with HLS; the rest consists of ready-made IP blocks and hand-written RTL components.
The accelerator was synthesised into an Arria 10 FPGA chip, which was able to accommodate three accelerators and the associated interface components. With two FPGAs connected to a high-end PC, our encoder was able to encode 2160p Ultra-High-Definition (UHD) video at 123 fps. Total FPGA resource usage was around 80%, with 346k Adaptive Logic Modules (ALMs) and 1227 Digital Signal Processing (DSP) blocks.
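Among the intra coding tools such an accelerator implements, DC prediction is one of the simplest to sketch. The formula below follows the HEVC specification for an NxN block (the boundary smoothing HEVC additionally applies to the first row and column of small luma blocks is omitted for brevity); the helper name is illustrative.

```python
def intra_dc(top, left):
    """HEVC intra DC prediction for an NxN block: every predicted sample is
    the rounded average of the N reference samples above and N to the left."""
    n = len(top)
    assert len(left) == n and n & (n - 1) == 0   # N must be a power of two
    # for power-of-two n, n.bit_length() == log2(n) + 1, the spec's shift
    dc = (sum(top) + sum(left) + n) >> n.bit_length()
    return [[dc] * n for _ in range(n)]

top = [100, 102, 104, 106]    # reference row above the block
left = [98, 96, 94, 92]       # reference column left of the block
print(intra_dc(top, left)[0])  # → [99, 99, 99, 99]
```

Because the prediction is a single reduction followed by a broadcast, this mode maps naturally onto an HLS-generated adder tree, unlike the angular modes whose per-sample interpolation dominates the accelerator's area.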