
    A two-stage video coding framework with both self-adaptive redundant dictionary and adaptively orthonormalized DCT basis

In this work, we propose a two-stage video coding framework as an extension of our previous one-stage framework in [1]. The two-stage framework consists of two different dictionaries. Specifically, the first stage directly finds the sparse representation of a block with a self-adaptive dictionary consisting of all possible inter-prediction candidates, by solving an L0-norm minimization problem using an improved orthogonal matching pursuit with embedded orthonormalization (eOMP) algorithm; the second stage codes the residual using a DCT dictionary adaptively orthonormalized to the subspace spanned by the first-stage atoms. The transition from the first stage to the second stage is determined based on both stages' quantization step sizes and a threshold. We further propose a complete context-adaptive entropy coder to efficiently code the locations and coefficients of the chosen first-stage atoms. Simulation results show that the proposed coder significantly improves the RD performance over our previous one-stage coder. More importantly, the two-stage coder, using a fixed block size and inter-prediction only, outperforms the H.264 coder (x264) and is competitive with the HEVC reference coder (HM) over a large rate range.
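As a rough illustration of the greedy sparse coding step described above, the sketch below implements orthogonal matching pursuit with the selected atoms orthonormalized on the fly (Gram-Schmidt). It is a minimal, generic version, not the authors' eOMP or their inter-prediction dictionary; the dictionary, signal, and stopping rule are illustrative assumptions.

```python
import numpy as np

def omp_with_orthonormalization(D, x, max_atoms, tol=1e-6):
    """Greedy sparse coding sketch: pick the dictionary atom most correlated
    with the residual, orthonormalize it against the atoms chosen so far
    (Gram-Schmidt), and update the residual by projection."""
    residual = x.astype(float).copy()
    chosen, Q = [], []                      # atom indices and orthonormal basis
    for _ in range(max_atoms):
        corr = np.abs(D.T @ residual)       # correlation with every atom
        k = int(np.argmax(corr))
        if corr[k] < tol:
            break
        q = D[:, k].astype(float).copy()
        for u in Q:                         # embedded orthonormalization
            q -= (u @ q) * u
        n = np.linalg.norm(q)
        if n < tol:                         # atom already in the current span
            break
        q /= n
        Q.append(q)
        chosen.append(k)
        residual -= (q @ residual) * q      # keep residual orthogonal to span(Q)
    return chosen, residual

# Toy usage: a random dictionary and a signal built from two of its atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 10] - 1.5 * D[:, 99]
atoms, res = omp_with_orthonormalization(D, x, max_atoms=4)
print(atoms, np.linalg.norm(res))
```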

    Motion estimation and CABAC VLSI co-processors for real-time high-quality H.264/AVC video coding

Real-time, high-quality video coding is gaining wide interest in the research and industrial communities for various applications. H.264/AVC, a recent standard for high-performance video coding, can be successfully exploited in several scenarios, including digital video broadcasting, high-definition TV, and DVD-based systems, which require sustaining throughputs of up to tens of Mbit/s. To that end, this paper proposes optimized architectures for the most critical tasks of H.264/AVC: motion estimation and context-adaptive binary arithmetic coding (CABAC). Post-synthesis results on sub-micron CMOS standard-cell technologies show that the proposed architectures can process 720 × 480 video sequences at 30 frames/s in real time and sustain more than 50 Mbit/s. The achieved circuit complexity and power consumption budgets are suitable for integration in complex VLSI multimedia systems based either on AHB bus-centric on-chip communication systems or on novel Network-on-Chip (NoC) infrastructures for MPSoCs (Multi-Processor Systems-on-Chip).
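The paper itself is about hardware co-processors, but as a software reference for the motion-estimation task those accelerators target, the following sketch shows full-search integer-pel block matching with a SAD criterion. It is a generic textbook formulation under assumed block and search-range sizes, not the paper's VLSI design.

```python
import numpy as np

def full_search_me(cur, ref, bx, by, block=16, search=8):
    """Full-search block matching: test every integer-pel displacement in a
    +/-search window and keep the one minimizing the sum of absolute differences."""
    h, w = ref.shape
    cur_blk = cur[by:by + block, bx:bx + block].astype(int)
    best_mv, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                      # candidate falls outside the frame
            cand = ref[y:y + block, x:x + block].astype(int)
            sad = int(np.abs(cur_blk - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# Toy usage on a synthetic luma plane with a known shift between frames.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))  # shift by 2 rows, 3 columns
print(full_search_me(cur, ref, bx=16, by=16))
```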

    Maximum-Entropy-Model-Enabled Complexity Reduction Algorithm in Modern Video Coding Standards

Symmetry considerations play a key role in modern science, and any differentiable symmetry of the action of a physical system has a corresponding conservation law. Symmetry may be regarded as a reduction of entropy. This work focuses on reducing the computational complexity of modern video coding standards by using the maximum entropy principle. The high computational complexity of the coding unit (CU) size decision in modern video coding standards is a critical challenge for real-time applications. This problem is addressed with a novel approach that treats the CU termination, skip, and normal decisions as a three-class classification problem. The maximum entropy model (MEM) is formulated for the CU size decision problem to optimize the conditional entropy, and the improved iterative scaling (IIS) algorithm is used to solve this optimization problem. The classification features consist of the spatio-temporal information of the CU, including the rate-distortion (RD) cost, coded block flag (CBF), and depth. As a case study, the proposed method is applied to the High Efficiency Video Coding (H.265/HEVC) standard. The experimental results demonstrate that the proposed method significantly reduces the computational complexity of the H.265/HEVC encoder. Compared with the H.265/HEVC reference model, the proposed method reduces the average encoding time by 53.27% and 56.36% under the low-delay and random-access configurations, while the Bjontegaard Delta Bit Rates (BD-BRs) are 0.72% and 0.93% on average.
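To make the three-class formulation concrete, here is a minimal sketch of a maximum-entropy (multinomial logistic) classifier over the terminate/skip/normal decisions, using placeholder (RD cost, CBF, depth) features. It is trained by plain gradient ascent rather than the paper's improved iterative scaling, and all data and parameters are illustrative.

```python
import numpy as np

CLASSES = ("terminate", "skip", "normal")

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_maxent(X, y, n_classes=3, lr=0.1, epochs=500):
    """Fit weights that maximize the conditional log-likelihood, which is the
    maximum-entropy solution under the feature-expectation constraints."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                 # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W)
        W += lr * X.T @ (Y - P) / n          # gradient of the log-likelihood
    return W

def decide_cu(W, features):
    p = softmax(features.reshape(1, -1) @ W)[0]
    return CLASSES[int(np.argmax(p))], p

# Toy usage with synthetic (rd_cost, cbf, depth) feature vectors.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 3))
y = np.digitize(X[:, 0], [-0.5, 0.5])        # synthetic three-class labels
W = train_maxent(X, y)
print(decide_cu(W, np.array([0.4, 1.0, 2.0])))
```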

    Video Stream Adaptation In Computer Vision Systems

Computer Vision (CV) has recently been deployed in a wide range of applications, including the surveillance and automotive industries. According to a recent report, the market for CV technologies will grow to $33.3 billion by 2019, with the surveillance and automotive industries accounting for over 20% of this market. This dissertation considers the design of real-time CV systems with live video streaming, especially those over wireless and mobile networks. Such systems include video cameras/sensors and monitoring stations. The cameras should adapt their captured videos based on the events and/or available resources and time requirements. The monitoring station receives video streams from all cameras and runs CV algorithms for decisions, warnings, control, and/or other actions. Real-time CV systems have constraints on power, computational, and communication resources. Most video adaptation techniques consider video distortion as the primary metric. In CV systems, however, the main objective is enhancing the event/object detection, recognition, and tracking accuracy. The accuracy can essentially be thought of as the quality perceived by machines, as opposed to human perceptual quality. High Efficiency Video Coding (HEVC) is a recent encoding standard that seeks to address the limited communication bandwidth resulting from the popularity of High Definition (HD) videos. Unfortunately, HEVC adopts algorithms that greatly slow down the encoding process, and thus it complicates real-time operation. This dissertation presents a method for adapting live video streams to limited and varying network bandwidth and energy resources. It analyzes and compares the rate-accuracy and rate-energy characteristics of various video stream adaptation techniques in CV systems. We model the video capturing, encoding, and transmission aspects and then provide an overall model of the power consumed by the video cameras and/or sensors. In addition to modeling the power consumption, we model the achieved bitrate of video encoding. We validate and analyze the power consumption models of each phase, as well as the aggregate power consumption model, through extensive experiments. The analysis includes examining individual parameters separately and examining the impact of changing more than one parameter at a time. For HEVC, we develop an algorithm that predicts the block size without iterating through the exhaustive Rate Distortion Optimization (RDO) method. We demonstrate the effectiveness of the proposed algorithm in comparison with existing algorithms. The proposed algorithm achieves approximately 5 times the encoding speed of the RDO algorithm and 1.42 times the encoding speed of the fastest analyzed algorithm.
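The dissertation models per-phase power (capture, encoding, transmission) and then aggregates it at the camera level. The sketch below shows only the shape of such an aggregate model; the functional forms, coefficients, and function names are illustrative placeholders, not the dissertation's fitted models.

```python
# Illustrative per-phase power terms summed into a camera-level estimate.
# All coefficients are made-up stand-ins for empirically fitted parameters.

def capture_power(fps, resolution_mp, k_cap=0.02):
    # Assume capture power grows with pixel throughput (megapixels per second).
    return k_cap * fps * resolution_mp

def encoding_power(fps, resolution_mp, complexity_factor, k_enc=0.05):
    # complexity_factor abstracts encoder settings (e.g. search range, RDO depth).
    return k_enc * fps * resolution_mp * complexity_factor

def transmission_power(bitrate_mbps, k_tx=0.15, p_idle=0.1):
    # Radio power modeled as an idle floor plus a term proportional to bitrate.
    return p_idle + k_tx * bitrate_mbps

def total_camera_power(fps, resolution_mp, complexity_factor, bitrate_mbps):
    return (capture_power(fps, resolution_mp)
            + encoding_power(fps, resolution_mp, complexity_factor)
            + transmission_power(bitrate_mbps))

# Example: a 2 MP camera at 30 fps, mid encoder complexity, streaming 4 Mbit/s.
print(f"{total_camera_power(30, 2.0, 0.5, 4.0):.2f} W (illustrative units)")
```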

    High Performance Multiview Video Coding

Following the standardization of the latest video coding standard, High Efficiency Video Coding (HEVC), in 2013, the multiview extension of HEVC (MV-HEVC) was published in 2014 and brought significantly better compression performance, around 50% for multiview and 3D videos compared with multiple independent single-view HEVC encodings. However, the extremely high computational complexity of MV-HEVC demands significant optimization of the encoder. To tackle this problem, this work investigates the possibilities of using modern parallel computing platforms and tools, such as single-instruction-multiple-data (SIMD) instructions, multi-core CPUs, massively parallel GPUs, and computer clusters, to significantly enhance the performance of the multiview video coding (MVC) encoder. These computing tools have very different characteristics, and misusing them may result in poor performance improvement and sometimes even degradation. To achieve the best possible encoding performance from modern computing tools, different levels of parallelism inside a typical MVC encoder are identified and analyzed. Novel optimization techniques at various levels of abstraction are proposed: non-aggregation massively parallel motion estimation (ME) and disparity estimation (DE) at the prediction unit (PU) level, fractional and bi-directional ME/DE acceleration through SIMD, quantization parameter (QP)-based early termination for coding tree units (CTUs), optimized resource-scheduled wave-front parallel processing for CTUs, and workload-balanced, cluster-based multi-view parallel encoding. The results show that the proposed parallel optimization techniques significantly improve execution time with insignificant loss in coding efficiency. This, in turn, proves that modern parallel computing platforms, with appropriate platform-specific algorithm design, are valuable tools for improving the performance of computationally intensive applications.
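One of the CTU-level techniques mentioned above is wave-front parallel processing. As a minimal sketch of the dependency pattern only (not the paper's resource-scheduled variant), the code below groups CTUs into diagonal waves so that a CTU starts only after its left and upper-right neighbours are done, and runs each wave on a thread pool; the "encode" step is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_ctu(r, c):
    # Placeholder for the real CTU encoding work.
    return (r, c)

def wpp_encode(rows, cols, workers=4):
    """Wave-front schedule: CTU (r, c) depends on (r, c-1) and (r-1, c+1),
    which groups CTUs into diagonal waves with index c + 2*r."""
    waves = {}
    for r in range(rows):
        for c in range(cols):
            waves.setdefault(c + 2 * r, []).append((r, c))
    order = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for wave in sorted(waves):
            # All CTUs in one wave have their dependencies met; run them in parallel.
            order.extend(pool.map(lambda rc: encode_ctu(*rc), waves[wave]))
    return order

print(wpp_encode(rows=3, cols=5))
```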

    A Bayesian Approach to Block Structure Inference in AV1-based Multi-rate Video Encoding

Due to differences in frame structure, existing multi-rate video encoding algorithms cannot be directly adapted to encoders that use special reference frames, such as AV1, without introducing substantial rate-distortion loss. To tackle this problem, we propose a novel Bayesian block structure inference model inspired by a modification to an HEVC-based algorithm. It estimates the posterior probability distributions of block partitioning and adapts early terminations in the RDO procedure accordingly. Experimental results show that the proposed method provides flexibility for controlling the trade-off between speed and coding efficiency, and can achieve an average time saving of 36.1% (up to 50.6%) with negligible bitrate cost. Comment: published in IEEE Data Compression Conference, 201
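To illustrate the Bayesian idea in a stripped-down form: from the first (reference-rate) encode, one can estimate how likely a block at a dependent rate is to split given the co-located reference decision, and skip the split branch of the RDO search when the posterior is low. The counts, smoothing, threshold, and interface below are assumptions for illustration, not the paper's model.

```python
from collections import Counter

class SplitPosterior:
    def __init__(self, alpha=1.0):
        self.counts = Counter()          # (ref_decision, dep_decision) -> count
        self.alpha = alpha               # Laplace smoothing

    def observe(self, ref_split, dep_split):
        self.counts[(ref_split, dep_split)] += 1

    def p_split(self, ref_split):
        """Posterior P(dependent block splits | reference decision)."""
        s = self.counts[(ref_split, True)] + self.alpha
        n = self.counts[(ref_split, False)] + self.alpha
        return s / (s + n)

def skip_split_search(model, ref_split, threshold=0.1):
    # Early termination: do not evaluate the split branch in RDO
    # when the posterior probability of splitting is below the threshold.
    return model.p_split(ref_split) < threshold

# Toy usage: train on a few (reference, dependent) decision pairs.
model = SplitPosterior()
for ref, dep in [(False, False)] * 20 + [(False, True)] * 1 + [(True, True)] * 15:
    model.observe(ref, dep)
print(model.p_split(False), skip_split_search(model, ref_split=False))
```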

    Rate-distortion and complexity optimized motion estimation for H.264 video coding

The H.264 video coding standard supports several inter-prediction coding modes that use macroblock (MB) partitions with variable block sizes. Rate-distortion (R-D) optimal selection of both the motion vectors (MVs) and the coding mode of each MB is essential for an H.264 encoder to achieve superior coding efficiency. Unfortunately, searching for the optimal MVs of each possible subblock incurs a heavy computational cost. In this paper, in order to reduce the computational burden of integer-pel motion estimation (ME) without sacrificing coding performance, we propose a joint R-D and complexity optimization framework. Within this framework, we develop a simple method that determines, for each MB, which partitions are likely to be optimal. The MV search is carried out only for the selected partitions, thus reducing the complexity of the ME step. The mode selection criterion is based on a measure of spatiotemporal activity within the MB. The procedure minimizes the coding loss at a given level of computational complexity, either for the full video sequence or for each single frame. In the latter case, the algorithm provides a tight upper bound on the worst-case complexity and execution time of the ME module. Simulation results show that the algorithm speeds up integer-pel ME by a factor of up to 40 with less than 0.2 dB loss in coding efficiency.
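A minimal sketch of the partition pre-selection idea is shown below: measure a simple spatiotemporal activity for a macroblock and restrict the MV search to the partition sizes plausible at that activity level. The activity measure and thresholds are illustrative assumptions, not the paper's calibrated criterion.

```python
import numpy as np

def spatiotemporal_activity(mb_cur, mb_prev):
    """Mean absolute temporal difference of a 16x16 macroblock."""
    return float(np.abs(mb_cur.astype(int) - mb_prev.astype(int)).mean())

def candidate_partitions(activity, t_low=2.0, t_high=8.0):
    if activity < t_low:                 # static area: large partition suffices
        return ["16x16"]
    if activity < t_high:                # moderate motion: mid-size partitions
        return ["16x16", "16x8", "8x16"]
    return ["16x16", "16x8", "8x16", "8x8"]   # complex motion: full set

# Toy usage on a synthetic macroblock pair with small temporal change.
rng = np.random.default_rng(3)
prev = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cur = np.clip(prev.astype(int) + rng.integers(-3, 4, (16, 16)), 0, 255)
a = spatiotemporal_activity(cur, prev)
print(round(a, 2), candidate_partitions(a))
```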

    Quality of Experience (QoE)-Aware Fast Coding Unit Size Selection for HEVC Intra-prediction

The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute-force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC to determine the optimum set of coding parameters for a given content demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor in the Quality of Experience (QoE) of the end users of emerging multimedia and future internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process to provide an early estimate of the CU size for a given content, avoiding the brute-force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM 16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted with the Video Quality Metric (VQM) show that the proposed algorithm has minimal visual quality impact on the reconstructed videos compared to state-of-the-art approaches.
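The following sketch shows one plausible shape of such a weighted-SVM CU decision: features and full-RDO split decisions collected during an initial training stretch are used to fit a cost-weighted SVM, which then predicts the split early for later CUs. The features, weighting scheme, and two-phase switch are illustrative assumptions, not the paper's exact online training procedure.

```python
import numpy as np
from sklearn.svm import SVC

def fit_cu_model(features, split_labels, rd_costs):
    # Weight samples by their RD cost so misclassifying expensive CUs hurts more.
    weights = np.asarray(rd_costs, dtype=float)
    weights /= weights.mean()
    model = SVC(kernel="linear")
    model.fit(features, split_labels, sample_weight=weights)
    return model

def predict_split(model, feature_vec):
    return bool(model.predict(np.asarray(feature_vec).reshape(1, -1))[0])

# Toy usage with synthetic (variance, gradient, qp) features.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # synthetic split rule
costs = rng.uniform(1.0, 5.0, size=200)
model = fit_cu_model(X, y, costs)
print(predict_split(model, [1.2, 0.3, -0.4]))
```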

    Spatial Prediction in the H.264/AVC FRExt Coder and its Optimization

This chapter presents a review of the fast spatial prediction strategies that were designed for the intra coding mode of the H.264/AVC video coding standard. At the end, the author presents an effective strategy based on belief propagation message passing.
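For orientation only, here is a minimal min-sum message-passing sketch on a 1-D chain of blocks, in the spirit of using belief propagation to select spatially coherent intra modes: each block has a local cost per candidate mode, and a pairwise term penalizes mode changes between neighbours. The costs are synthetic, and this is not the chapter's actual strategy for the H.264/AVC FRExt coder.

```python
import numpy as np

def min_sum_chain(local_cost, switch_penalty=1.0):
    """Exact min-sum belief propagation (Viterbi) on a chain of blocks."""
    n, m = local_cost.shape                       # blocks x candidate modes
    pair = switch_penalty * (1 - np.eye(m))       # 0 if same mode, penalty otherwise
    msg = np.zeros((n, m))                        # forward messages
    for i in range(1, n):
        msg[i] = np.min(local_cost[i - 1] + msg[i - 1] + pair.T, axis=1)
    # Backtrack the mode labels that minimize the total cost.
    modes = np.empty(n, dtype=int)
    modes[-1] = int(np.argmin(local_cost[-1] + msg[-1]))
    for i in range(n - 2, -1, -1):
        modes[i] = int(np.argmin(local_cost[i] + msg[i] + pair[:, modes[i + 1]]))
    return modes

# Toy usage: 6 blocks, 4 intra modes, noisy local costs favouring mode 2.
rng = np.random.default_rng(5)
costs = rng.uniform(0, 1, (6, 4))
costs[:, 2] -= 0.6
print(min_sum_chain(costs))
```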

    Bayesian adaptive algorithm for fast coding unit decision in the High Efficiency Video Coding (HEVC) standard

The latest High Efficiency Video Coding (HEVC) standard provides a set of new coding tools to achieve significantly higher coding efficiency than previous standards. In this standard, pixels are first grouped into Coding Units (CUs), then Prediction Units (PUs), and finally Transform Units (TUs). All these coding levels are organized into a quadtree-shaped arrangement that allows highly flexible data representation; however, they involve a very high computational complexity. In this paper, we propose an effective early CU depth decision algorithm to reduce the encoder complexity. Our proposal is based on a hierarchical approach in which a hypothesis test is designed to make a decision at every CU depth, where the algorithm either produces an early termination or decides to evaluate the subsequent depth level. Moreover, the proposed method is able to adaptively estimate the parameters that define each hypothesis test, so that it adapts its behavior to the variable content of the video sequences. The proposed method has been extensively tested, and the experimental results show that our proposal outperforms several state-of-the-art methods, achieving a significant reduction of the computational complexity (36.5% and 38.2% average reductions in coding time for two different encoder configurations) in exchange for very slight losses in coding performance (1.7% and 0.8% average bit rate increments). This work has been partially supported by the National Grant TEC2014-53390-P of the Spanish Ministry of Economy and Competitiveness.
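A minimal sketch of a per-depth hypothesis test is given below: the RD cost of a CU at the current depth is compared under two adaptively estimated Gaussian models, "no further split" versus "split", and the search terminates early when the likelihood ratio favours termination. The Gaussian assumption, the online statistics, and the threshold are illustrative, not the paper's exact test.

```python
import math

class DepthTest:
    def __init__(self):
        self.stats = {True: [0, 0.0, 0.0], False: [0, 0.0, 0.0]}  # n, sum, sumsq

    def update(self, rd_cost, was_split):
        s = self.stats[was_split]
        s[0] += 1
        s[1] += rd_cost
        s[2] += rd_cost * rd_cost

    def _loglik(self, rd_cost, was_split):
        n, sm, sq = self.stats[was_split]
        if n < 2:
            return 0.0                      # not enough data: uninformative
        mean = sm / n
        var = max(sq / n - mean * mean, 1e-6)
        return -0.5 * (math.log(2 * math.pi * var) + (rd_cost - mean) ** 2 / var)

    def terminate_early(self, rd_cost, log_threshold=0.0):
        ratio = self._loglik(rd_cost, False) - self._loglik(rd_cost, True)
        return ratio > log_threshold

# Toy usage: non-split CUs tend to have low RD cost, split CUs high.
test = DepthTest()
for cost in (10, 12, 9, 11):
    test.update(cost, was_split=False)
for cost in (40, 55, 48, 60):
    test.update(cost, was_split=True)
print(test.terminate_early(13), test.terminate_early(50))
```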