41,464 research outputs found

    Lossless Intra Coding in HEVC with 3-tap Filters

    This paper presents a pixel-by-pixel spatial prediction method for lossless intra coding within High Efficiency Video Coding (HEVC). A well-known previous pixel-by-pixel spatial prediction method uses only two neighboring pixels for prediction, based on the angular projection idea borrowed from block-based intra prediction in lossy coding. This paper explores a method that uses three neighboring pixels for prediction according to a two-dimensional correlation model, where the neighbor pixels and prediction weights used change depending on the intra mode. To find the best prediction weights for each intra mode, a two-stage offline optimization algorithm is used, and a number of implementation aspects are discussed to simplify the proposed prediction method. The proposed method is implemented in the HEVC reference software, and experimental results show that the explored 3-tap filtering method achieves an average 11.34% bitrate reduction over the default lossless intra coding in HEVC. The proposed method also decreases average decoding time by 12.7% while increasing average encoding time by 9.7%. Comment: 10 pages, 7 figures.
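    As a rough illustration of the pixel-by-pixel idea, the Python sketch below predicts each pixel as a weighted sum of its left, top and top-left neighbors. The function name and the fixed weights (0.45/0.45/0.10) are placeholder assumptions; the paper derives mode-dependent neighbors and weights through its offline optimization.

```python
import numpy as np

def predict_3tap(block, w_left=0.45, w_top=0.45, w_topleft=0.10):
    """Pixel-by-pixel 3-tap prediction inside one block (illustrative sketch).

    Each pixel is predicted from its left, top and top-left neighbors.
    In lossless coding the reconstruction equals the original, so the
    causal neighbors can be read from the input block itself.  The weights
    are illustrative placeholders, not the paper's optimized per-mode weights.
    """
    h, w = block.shape
    padded = np.zeros((h + 1, w + 1), dtype=np.float64)
    padded[1:, 1:] = block                 # row/column 0 act as zero-valued borders
    pred = np.empty_like(block, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            left     = padded[y + 1, x]
            top      = padded[y,     x + 1]
            top_left = padded[y,     x]
            pred[y, x] = w_left * left + w_top * top + w_topleft * top_left
    return np.rint(pred).astype(block.dtype)

# The lossless residual that would be entropy coded:
block = np.random.randint(0, 256, (8, 8), dtype=np.int32)
residual = block - predict_3tap(block)
```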

    Fast intra prediction in the transform domain

    In this paper, we present a fast intra prediction method based on separating the transformed coefficients. The prediction block is obtained from the transformed and quantized neighboring blocks by minimizing distortion for the DC and AC coefficients independently. Two prediction methods are proposed: full block search prediction (FBSP) and edge based distance prediction (EBDP), which find the best-matched transformed coefficients in additional neighboring blocks. Experimental results show that the use of transform coefficients greatly enhances the efficiency of intra prediction while keeping complexity low compared to H.264/AVC.
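    A minimal sketch of the transform-domain prediction idea, assuming an orthonormal 2-D DCT and a small list of candidate neighbor blocks: the DC and the AC coefficients are predicted independently, each from whichever transformed candidate gives the smaller distortion. The FBSP/EBDP search strategies and the signaling or derivation of the chosen candidate at the decoder are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn

def predict_in_transform_domain(current, neighbors):
    """Pick DC and AC predictors independently from transformed neighbors.

    `current` is the block to be coded; `neighbors` is a list of already
    coded neighboring blocks.  The DC coefficient and the AC coefficients
    are each taken from the candidate whose transform coefficients are
    closest to the current block's, mimicking per-coefficient-group
    distortion minimization.  In a real codec the chosen candidates would
    have to be signaled or inferred (e.g. from edges), since the decoder
    cannot see `current`.
    """
    cur_t = dctn(current, norm='ortho')
    cand_t = [dctn(n, norm='ortho') for n in neighbors]

    # Best DC predictor: candidate with the closest DC coefficient.
    best_dc = min(cand_t, key=lambda c: abs(c[0, 0] - cur_t[0, 0]))[0, 0]

    # Best AC predictor: candidate with the smallest AC distortion.
    def ac_error(c):
        diff = c - cur_t
        diff[0, 0] = 0.0                   # exclude DC from the AC distortion
        return np.sum(diff ** 2)

    pred_t = min(cand_t, key=ac_error).copy()
    pred_t[0, 0] = best_dc                 # combine the two predictions
    return pred_t                          # predicted transform coefficients
```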

    A two-stage video coding framework with both self-adaptive redundant dictionary and adaptively orthonormalized DCT basis

    In this work, we propose a two-stage video coding framework, as an extension of our previous one-stage framework in [1]. The two-stage framework uses two different dictionaries. Specifically, the first stage directly finds the sparse representation of a block with a self-adaptive dictionary consisting of all possible inter-prediction candidates, by solving an L0-norm minimization problem with an improved orthogonal matching pursuit with embedded orthonormalization (eOMP) algorithm; the second stage codes the residual using a DCT dictionary adaptively orthonormalized to the subspace spanned by the first-stage atoms. The transition from the first stage to the second stage is determined by both stages' quantization step sizes and a threshold. We further propose a complete context-adaptive entropy coder to efficiently code the locations and coefficients of the chosen first-stage atoms. Simulation results show that the proposed coder significantly improves the RD performance over our previous one-stage coder. More importantly, the two-stage coder, using a fixed block size and inter-prediction only, outperforms the H.264 coder (x264) and is competitive with the HEVC reference coder (HM) over a large rate range.
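    The following toy sketch mirrors the two-stage structure on one vectorized block: a greedy first stage selects and orthonormalizes atoms from a candidate dictionary (standing in for the inter-prediction candidates), and a second stage represents the leftover residual in a DCT basis restricted to the orthogonal complement of the first-stage subspace. The function, the fixed atom budget and the plain Gram-Schmidt/SVD steps are illustrative assumptions; quantization, the stage-transition threshold and entropy coding are omitted.

```python
import numpy as np
from scipy.fft import dct

def two_stage_sketch(x, dictionary, n_atoms=4):
    """Toy two-stage representation of one vectorized block.

    Stage 1: greedy atom selection over `dictionary` (columns are candidate
    predictors) with Gram-Schmidt orthonormalization of the chosen atoms,
    a much simplified stand-in for the paper's eOMP.
    Stage 2: the remaining residual is expressed in a DCT basis projected
    onto the orthogonal complement of the stage-1 subspace.
    """
    x = x.astype(np.float64)
    n = x.size
    Q = np.zeros((n, 0))                       # orthonormal stage-1 atoms
    residual = x.copy()

    for _ in range(n_atoms):
        # Atom most correlated with the current residual.
        atom = dictionary[:, np.argmax(np.abs(dictionary.T @ residual))].astype(np.float64)
        atom -= Q @ (Q.T @ atom)               # orthonormalize against chosen atoms
        norm = np.linalg.norm(atom)
        if norm < 1e-12:
            break
        Q = np.hstack([Q, (atom / norm)[:, None]])
        residual = x - Q @ (Q.T @ x)           # residual left for stage 2

    # Stage 2: orthonormal basis of the complement of span(Q), built from the DCT.
    dct_basis = dct(np.eye(n), axis=0, norm='ortho')
    projected = dct_basis - Q @ (Q.T @ dct_basis)
    U, _, _ = np.linalg.svd(projected)
    basis2 = U[:, : n - Q.shape[1]]

    return Q, Q.T @ x, basis2, basis2.T @ residual
```

    With these pieces, `Q @ coeffs1 + basis2 @ coeffs2` reconstructs the block up to floating-point error; a real coder would instead quantize and entropy code the two coefficient sets.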