
    Maximum-Entropy-Model-Enabled Complexity Reduction Algorithm in Modern Video Coding Standards

    Symmetry considerations play a key role in modern science: any differentiable symmetry of the action of a physical system has a corresponding conservation law, and symmetry may be regarded as a reduction of entropy. This work focuses on reducing the computational complexity of modern video coding standards by using the maximum entropy principle. The high computational complexity of the coding unit (CU) size decision in modern video coding standards is a critical challenge for real-time applications. This problem is addressed with a novel approach that treats the CU termination, skip, and normal decisions as a three-class classification problem. The maximum entropy model (MEM) is formulated for the CU size decision problem to optimize the conditional entropy, and the improved iterative scaling (IIS) algorithm is used to solve this optimization problem. The classification features consist of the spatio-temporal information of the CU, including the rate-distortion (RD) cost, coded block flag (CBF), and depth. As a case study, the proposed method is applied to the High Efficiency Video Coding (H.265/HEVC) standard. The experimental results demonstrate that the proposed method significantly reduces the computational complexity of the H.265/HEVC encoder. Compared with the H.265/HEVC reference model, it reduces the average encoding time by 53.27% and 56.36% under the low-delay and random-access configurations, while the Bjontegaard Delta Bit Rates (BD-BRs) are 0.72% and 0.93% on average.
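
    A minimal sketch of how such a three-class maximum entropy classifier over the named features (RD cost, CBF, depth) could look. The training data is a synthetic placeholder, and plain gradient ascent on the conditional log-likelihood stands in for the IIS solver used in the paper (both converge to the same optimum):

```python
import numpy as np

# Hypothetical training data: each row is [rd_cost, cbf, depth] for one CU.
# Labels: 0 = terminate, 1 = skip, 2 = normal (the three-class decision).
X = np.array([[0.2, 0, 3], [0.9, 1, 0], [0.5, 1, 1],
              [0.1, 0, 2], [0.8, 1, 0], [0.4, 0, 2]], dtype=float)
y = np.array([0, 2, 1, 0, 2, 1])

K = 3                                  # number of decision classes
W = np.zeros((K, X.shape[1]))          # one weight vector per class

def posteriors(W, X):
    """Conditional maximum-entropy model: p(y|x) proportional to exp(w_y . x)."""
    scores = X @ W.T
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

# Gradient ascent on the conditional log-likelihood (IIS stand-in).
for _ in range(500):
    P = posteriors(W, X)
    for k in range(K):
        grad = ((y == k).astype(float) - P[:, k]) @ X
        W[k] += 0.1 * grad

cu_features = np.array([0.15, 0, 2.0])             # a new CU to classify
decision = posteriors(W, cu_features[None, :]).argmax()
print(["terminate", "skip", "normal"][decision])
```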

    Optimal coding unit decision for early termination in high efficiency video coding using enhanced whale optimization algorithm

    Video compression is an active research topic in the field of block-based video encoders. Owing to the growth of video coding technologies, High Efficiency Video Coding (HEVC) delivers superior coding performance; the improved rate-distortion (RD) performance, however, comes with increased encoding complexity. In video compression, the out-sized coding units (CUs) have a higher encoding complexity, so the computational encoding cost and complexity remain vital concerns that need to be treated as an optimization task. In this manuscript, an enhanced whale optimization algorithm (EWOA) is implemented to reduce the computational time and complexity of HEVC. In the EWOA, a cosine function is incorporated with the controlling parameter A, and two correlation factors are included in the WOA to control the positions of the whales and regulate the movement of the search mechanism during the optimization and search processes. The bit streams in the luma coding tree block are selected using the EWOA, which defines the CU neighbors used in the HEVC. The results indicate that the EWOA achieves the best bit rate (BR), time saving, and peak signal-to-noise ratio (PSNR), showing 0.006-0.012 dB higher PSNR than existing models on real-time videos.
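
    The abstract does not give the exact cosine modulation or the two correlation factors, so the following is a hedged sketch of a standard WOA skeleton with an illustrative cosine decay of the control parameter `a`, minimizing a stand-in cost function (in the paper this would score a candidate CU configuration):

```python
import numpy as np

def ewoa_minimize(cost, dim, n_whales=20, iters=100, seed=0):
    """Whale optimization with a cosine-modulated control parameter.

    The cosine decay of `a` below is an illustrative assumption layered on
    the standard WOA update rules; the paper's exact EWOA modulation and
    correlation factors are not specified in the abstract.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_whales, dim))
    best = min(pos, key=cost).copy()

    for t in range(iters):
        # Cosine decay: a goes smoothly from 2 down to 0 over the run.
        a = 2 * np.cos(np.pi * t / (2 * iters))
        for i in range(n_whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.linalg.norm(A) < 1:          # exploit: encircle the best
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                              # explore around a random whale
                    rand = pos[rng.integers(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                                  # spiral update toward the best
                l = rng.uniform(-1, 1)
                d = np.abs(best - pos[i])
                pos[i] = d * np.exp(l) * np.cos(2 * np.pi * l) + best
        cand = min(pos, key=cost)
        if cost(cand) < cost(best):
            best = cand.copy()
    return best

# Stand-in cost function for demonstration only.
print(ewoa_minimize(lambda x: np.sum(x ** 2), dim=4))
```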

    Video Stream Adaptation In Computer Vision Systems

    Computer Vision (CV) has recently been deployed in a wide range of applications, including the surveillance and automotive industries. According to a recent report, the market for CV technologies will grow to $33.3 billion by 2019, with the surveillance and automotive industries accounting for over 20% of this market. This dissertation considers the design of real-time CV systems with live video streaming, especially those operating over wireless and mobile networks. Such systems include video cameras/sensors and monitoring stations. The cameras should adapt their captured videos based on the events and/or the available resources and time requirements. The monitoring station receives video streams from all cameras and runs CV algorithms for decisions, warnings, control, and/or other actions. Real-time CV systems are constrained in power, computational, and communication resources. Most video adaptation techniques consider video distortion as the primary metric; in CV systems, however, the main objective is enhancing the accuracy of event/object detection, recognition, and tracking. This accuracy can essentially be thought of as the quality perceived by machines, as opposed to the human perceptual quality. High Efficiency Video Coding (HEVC) is a recent encoding standard that seeks to address the limited communication bandwidth resulting from the popularity of High Definition (HD) videos. Unfortunately, HEVC adopts algorithms that greatly slow down the encoding process, which complicates its use in real-time systems. This dissertation presents a method for adapting live video streams to limited and varying network bandwidth and energy resources. It analyzes and compares the rate-accuracy and rate-energy characteristics of various video stream adaptation techniques in CV systems. We model the video capturing, encoding, and transmission aspects and then provide an overall model of the power consumed by the video cameras and/or sensors. In addition to the power consumption, we model the achieved bitrate of video encoding. We validate and analyze the power consumption models of each phase, as well as the aggregate model, through extensive experiments. The analysis examines individual parameters separately as well as the impact of changing more than one parameter at a time. For HEVC, we develop an algorithm that predicts the size of the block without iterating through the exhaustive Rate Distortion Optimization (RDO) method, and we demonstrate its effectiveness in comparison with existing algorithms. The proposed algorithm achieves approximately 5 times the encoding speed of the RDO algorithm and 1.42 times the encoding speed of the fastest analyzed algorithm.
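
    The dissertation's power modeling is only summarized in the abstract, so the following is a minimal sketch of the general idea: aggregate camera power as the sum of capture, encoding, and transmission terms plus a static baseline. The function name and every coefficient are hypothetical placeholders, not values from the dissertation:

```python
# Hedged sketch of an aggregate camera power model: total power as the sum
# of the capture, encoding, and transmission phases. All coefficients are
# hypothetical placeholders for illustration only.

def camera_power_watts(fps, resolution_mpix, bitrate_mbps,
                       p_capture_per_mpix=0.08,   # assumed W per Mpixel/s captured
                       p_encode_per_mpix=0.25,    # assumed W per Mpixel/s encoded
                       p_tx_per_mbps=0.12,        # assumed W per Mbit/s transmitted
                       p_idle=0.5):               # assumed static power, W
    pixels_per_sec = fps * resolution_mpix        # Mpixels processed per second
    p_capture = p_capture_per_mpix * pixels_per_sec
    p_encode = p_encode_per_mpix * pixels_per_sec
    p_transmit = p_tx_per_mbps * bitrate_mbps
    return p_idle + p_capture + p_encode + p_transmit

# 1080p (~2.07 Mpixel) at 30 fps, streamed at 4 Mbit/s:
print(camera_power_watts(fps=30, resolution_mpix=2.07, bitrate_mbps=4.0))
```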

    CTU Depth Decision Algorithms for HEVC: A Survey

    High-Efficiency Video Coding (HEVC) surpasses its predecessors in encoding efficiency by introducing new coding tools at the cost of an increased encoding time complexity. The Coding Tree Unit (CTU) is the main building block used in HEVC. In the HEVC standard, frames are divided into CTUs with a predetermined size of up to 64x64 pixels. Each CTU is then divided recursively into a number of equally sized square areas, known as Coding Units (CUs). Although this diversity of frame partitioning increases encoding efficiency, it also increases the time complexity due to the larger number of ways to find the optimal partitioning. To address this complexity, numerous algorithms have been proposed to eliminate unnecessary searches during CTU partitioning by exploiting the correlation in the video. In this paper, existing CTU depth decision algorithms for HEVC are surveyed. These algorithms are categorized into two groups, namely statistics and machine learning approaches. Statistics approaches are further subdivided into neighboring and inherent approaches. Neighboring approaches exploit the similarity between adjacent CTUs to limit the depth range of the current CTU, while inherent approaches use only the information available within the current CTU. Machine learning approaches try to extract and exploit similarities implicitly: traditional methods like support vector machines or random forests use manually selected features, while recently proposed deep learning methods extract features during training. Finally, this paper discusses extending these methods to more recent video coding formats such as Versatile Video Coding (VVC) and AOMedia Video 1 (AV1).
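
    As one illustration of the neighboring category the survey describes, here is a minimal sketch of bounding the current CTU's depth search range from the depths chosen for adjacent and co-located CTUs. The choice of neighbors and the +/-1 safety margin are assumptions for the example, not rules from the survey:

```python
def depth_search_range(left, above, above_left, colocated,
                       min_depth=0, max_depth=3):
    """Bound the current CTU's depth search using neighbor depths.
    Unavailable neighbors (frame border, first frame) are passed as None."""
    neighbors = [d for d in (left, above, above_left, colocated) if d is not None]
    if not neighbors:
        return min_depth, max_depth             # no prior: search all depths
    lo = max(min_depth, min(neighbors) - 1)     # assumed +/-1 safety margin
    hi = min(max_depth, max(neighbors) + 1)
    return lo, hi

# Neighbors settled on depths 2, 2, 3, 2: depth 0 is skipped entirely.
print(depth_search_range(2, 2, 3, 2))           # -> (1, 3)
```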

    Efficient VVC Intra Prediction Based on Deep Feature Fusion and Probability Estimation

    The ever-growing multimedia traffic has underscored the importance of effective multimedia codecs. Among them, the most recent lossy video coding standard, Versatile Video Coding (VVC), has been attracting the attention of the video coding community. However, the gain of VVC is achieved at the cost of significant encoding complexity, which creates the need for a fast encoder with comparable Rate Distortion (RD) performance. In this paper, we propose to optimize the VVC complexity at intra-frame prediction, with a two-stage framework of deep feature fusion and probability estimation. At the first stage, we employ a deep convolutional network to extract the spatial-temporal neighboring coding features, and fuse all reference features obtained by different convolutional kernels to determine an optimal intra coding depth. At the second stage, we employ a probability-based model and the spatial-temporal coherence to select the candidate partition modes within the optimal coding depth. Finally, these selected depths and partitions are executed, while unnecessary computations are excluded. Experimental results on a standard database demonstrate the superiority of the proposed method, especially for High Definition (HD) and Ultra-HD (UHD) video sequences.
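
    A compact sketch of the two-stage idea in PyTorch: stage one fuses features from convolution branches with different kernel sizes to score coding depths, and stage two keeps only the most probable partition modes. The network sizes, the single-block (spatial-only) input, the mode names, and the probability-mass pruning rule are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class DepthFusionNet(nn.Module):
    """Stage 1 sketch: fuse features from convolution branches with
    different kernel sizes to classify the optimal intra coding depth.
    Layer sizes and the 4-depth output are illustrative assumptions."""
    def __init__(self, depths=4):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, k, padding=k // 2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for k in (3, 5, 7)                    # different receptive fields
        ])
        self.head = nn.Linear(8 * 3, depths)

    def forward(self, x):
        feats = [b(x).flatten(1) for b in self.branches]
        return self.head(torch.cat(feats, dim=1))    # depth logits

def prune_partitions(mode_probs, keep_mass=0.9):
    """Stage 2 sketch: keep the most probable partition modes until a
    cumulative probability mass is covered; the rest are skipped."""
    order = sorted(mode_probs, key=mode_probs.get, reverse=True)
    kept, mass = [], 0.0
    for m in order:
        kept.append(m)
        mass += mode_probs[m]
        if mass >= keep_mass:
            break
    return kept

block = torch.randn(1, 1, 32, 32)                 # toy luma block
depth = DepthFusionNet()(block).argmax(1).item()
modes = prune_partitions({"QT": 0.5, "BT_H": 0.25, "BT_V": 0.15,
                          "TT_H": 0.06, "TT_V": 0.04})
print(depth, modes)
```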

    Motion estimation algorithm and its hardware architecture for HEVC

    Doctoral thesis in Electrical Engineering. Video coding has been used in applications such as video surveillance, video conferencing, video streaming, video broadcasting, and video storage. In a typical video coding standard, many algorithms are combined to compress a video; among them, motion estimation is the most complex task. Hence, it is necessary to implement this task in real time by using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its real-time implementation. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all variable block sizes specified in the HEVC standard, as well as a motion vector search range of up to ±64 pixels.
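
    The abstract does not detail the proposed algorithm itself, so the sketch below illustrates the general class it belongs to with a classic diamond-pattern block-matching search over a ±64 pixel range. The SAD cost, 16x16 block size, and synthetic frames are assumptions for the example, not the thesis's method:

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between the current n x n block and a
    reference block displaced by the candidate motion vector (dx, dy)."""
    return int(np.abs(cur[by:by+n, bx:bx+n].astype(np.int32)
                      - ref[by+dy:by+dy+n, bx+dx:bx+dx+n].astype(np.int32)).sum())

def diamond_search(cur, ref, bx, by, search_range=64, n=16):
    """Generic fast block-matching: a large diamond pattern walks toward the
    cost minimum, then one small-diamond step refines it. This illustrates
    the class of fast ME algorithms, not the thesis's specific method."""
    large = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    def ok(dx, dy):
        return (abs(dx) <= search_range and abs(dy) <= search_range
                and 0 <= by + dy and by + dy + n <= ref.shape[0]
                and 0 <= bx + dx and bx + dx + n <= ref.shape[1])
    mvx = mvy = 0
    while True:                                    # large diamond until centred
        cands = [(mvx+dx, mvy+dy) for dx, dy in large if ok(mvx+dx, mvy+dy)]
        best = min(cands, key=lambda v: sad(cur, ref, bx, by, v[0], v[1], n))
        if best == (mvx, mvy):
            break
        mvx, mvy = best
    cands = [(mvx+dx, mvy+dy) for dx, dy in small if ok(mvx+dx, mvy+dy)]
    return min(cands, key=lambda v: sad(cur, ref, bx, by, v[0], v[1], n))

yy, xx = np.mgrid[0:64, 0:64]                      # smooth synthetic frame so
ref = (127 + 60 * np.sin(xx / 5) + 60 * np.sin(yy / 7)).astype(np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))            # frame shifted down 2, right 3
print(diamond_search(cur, ref, bx=24, by=24))      # recovers roughly (-3, -2)
```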

    Algorithms for fast implementation of high efficiency video coding

    Recently, there has been a higher demand for video content in multimedia communication, which leads to increased requirements for storage and bandwidth posed to internet service providers. Because of this, it became necessary for the telecommunication standardization sector of the International Telecommunication Union (ITU-T) to launch a new video compression standard that would address the twin challenges of lowering both digital file sizes in storage media and transmission bandwidths in networks. The High Efficiency Video Coding (HEVC) standard, also known as H.265, was launched in 2013 to address these challenges. This new standard was able to cut existing media file sizes and bandwidths by 50%, but its computational complexity causes about a 400% increase in HEVC video encoding time. This study proposes a solution to the above problem based on three key areas of the HEVC. Firstly, two fast motion estimation algorithms based on triangle and pentagon structures are proposed to implement motion estimation and compensation in a shorter time. Secondly, an enhanced and optimized inter-prediction mode selection is proposed. Thirdly, an enhanced intra-prediction mode scheme with reduced latency is suggested. Based on the test model of the HEVC reference software, each individual algorithm manages to reduce the encoding time across all video classes by an average of 20-30%, with a best-case reduction of 70%, at a negligible loss in coding efficiency and video quality. In practice, these algorithms would enhance the performance of the HEVC compression standard and enable higher-resolution and higher-frame-rate video encoding compared to the state-of-the-art techniques.
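
    The triangle and pentagon search structures are not specified in the abstract, so as a hedged illustration of the intra-prediction part instead, the sketch below shows a reduced-latency intra mode selection: a coarse cost ranks a sparse subset of the 35 HEVC intra modes, and only the best coarse mode and its angular neighbours reach full rate-distortion checking. The sampling step of 4 and the +/-1 refinement window are assumptions:

```python
def select_intra_candidates(coarse_cost, step=4):
    """Rank a sparse subset of HEVC intra modes (0 = planar, 1 = DC,
    2..34 = angular) by a cheap coarse cost, then refine only around
    the best angular mode instead of RD-checking all 35 modes."""
    modes = [0, 1] + list(range(2, 35, step))      # planar, DC, sparse angular
    best = min(modes, key=coarse_cost)
    if best < 2:
        return [best]                              # planar/DC: nothing to refine
    refine = {m for m in (best - 1, best, best + 1) if 2 <= m <= 34}
    return sorted(refine)

# Stand-in coarse cost with a minimum at angular mode 26 (vertical).
cands = select_intra_candidates(lambda m: abs(m - 26) + 2)
print(cands)      # only these few modes reach full RD optimization
```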

    Algorithms for complexity management in video coding

    Nowadays, applications based on video services are becoming very popular, e.g., the transmission of video sequences over the Internet or mobile networks, and the increasingly common use of High Definition (HD) video signals in television and Blu-Ray systems. Thanks to this popularity of video services, video coding has become an essential tool to transmit and store digital video sequences. The standardization organizations have developed several video coding standards, the most recent being H.264/AVC and HEVC. Both standards achieve great results in compressing the video signal by virtue of a set of spatio-temporal predictive techniques. Nevertheless, the efficacy of these techniques comes in exchange for a high increase in the computational cost of the video coding process. Due to the high complexity of these standards, a variety of algorithms attempting to control the computational burden of video coding have been developed. The goal of these algorithms is to control the coder complexity, using a specific amount of coding resources while keeping the coding efficiency as high as possible. In this PhD thesis, we propose two algorithms devoted to controlling the complexity of the H.264/AVC and HEVC standards. Relying on the statistical properties of the video sequences, we demonstrate that the developed methods are able to control the computational burden while avoiding relevant losses in coding efficiency. Moreover, our proposals are designed to adapt their behavior to the video content, as well as to different target complexities. The proposed methods have been thoroughly tested and compared with other state-of-the-art proposals for a variety of video resolutions, video sequences, and coding configurations. The obtained results prove that our methods outperform other approaches and reveal that they are suitable for practical implementations of coding standards, where computational complexity is a key feature for a proper design of the system.
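
    A minimal sketch of the feedback principle behind such complexity control, assuming a hypothetical per-frame controller that tunes the maximum CU depth toward a target encoding time; the gain, tolerance, and bounds are illustrative, not the thesis's actual controller:

```python
def update_max_depth(max_depth, frame_time, target_time,
                     tolerance=0.05, lo=0, hi=3):
    """Hypothetical one-step controller: after each coded frame, adjust a
    complexity knob (here, the maximum CU depth) so the measured encoding
    time tracks the target budget within a relative tolerance."""
    error = (frame_time - target_time) / target_time
    if error > tolerance and max_depth > lo:
        return max_depth - 1        # too slow: prune the partitioning search
    if error < -tolerance and max_depth < hi:
        return max_depth + 1        # headroom left: allow deeper partitions
    return max_depth

depth = 3
for t in [50.0, 47.0, 36.0, 33.0]:  # measured ms per frame (toy numbers)
    depth = update_max_depth(depth, t, target_time=35.0)
    print(depth)                     # converges around the time budget
```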

    SVM based approach for complexity control of HEVC intra coding

    High Efficiency Video Coding (HEVC) has been adopted by various video applications in recent years. Because of its high computational demand, controlling the complexity of HEVC is of paramount importance for meeting the varying requirements of many applications, including power-constrained video coding, video streaming, and cloud gaming. Most of the existing complexity control methods are only capable of considering a subset of the decision space, which leads to low coding efficiency. While machine learning methods such as Support Vector Machines (SVMs) can be employed for higher-precision decision making, the current SVM-based techniques for HEVC provide a fixed decision boundary, which results in different coding complexities for different video content. Although this might be suitable for complexity reduction, it is not acceptable for complexity control. This paper proposes an adjustable classification approach for Coding Unit (CU) partitioning, which addresses the mentioned problems of complexity control. Firstly, a novel set of features for fast CU partitioning is designed using image processing techniques. Then, a flexible classification method based on SVM is proposed to model the CU partitioning problem. This approach allows adjusting the performance-complexity trade-off, even after the training phase. Using this model and a novel adaptive thresholding technique, an algorithm is presented to deliver video encoding within the target coding complexity, while maximizing the coding efficiency. Experimental results justify the superiority of this method over state-of-the-art methods, with target complexities ranging from 20% to 100%.
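
    A hedged sketch of the adjustable-boundary idea using scikit-learn: a trained SVM's decision function is compared against a movable threshold, so the split/non-split trade-off can be retuned after training. The features, data, and threshold values are synthetic placeholders, not the paper's feature set:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # stand-in CU features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = split CU, 0 = do not split

clf = SVC(kernel="rbf").fit(X, y)                # fixed boundary after training

def decide_split(features, threshold=0.0):
    """Raising the threshold makes 'split' rarer, cutting complexity at some
    cost in coding efficiency; lowering it does the opposite. The threshold
    can then be tuned online to hit a target complexity."""
    return clf.decision_function(features) > threshold

cu = rng.normal(size=(5, 4))
for th in (-0.5, 0.0, 0.5):                      # three complexity operating points
    print(th, decide_split(cu, th).astype(int))
```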