
    Variable Block Size Motion Compensation In The Redundant Wavelet Domain

    Video is one of the most powerful forms of multimedia because of the extensive information it delivers. Video sequences are highly correlated both temporally and spatially, a fact which makes the compression of video possible. Modern video systems employ motion estimation and motion compensation (ME/MC) to de-correlate a video sequence temporally. ME/MC forms a prediction of the current frame using frames that have already been encoded. Consequently, one needs to transmit the corresponding residual image instead of the original frame, as well as a set of motion vectors which describe the scene motion as observed at the encoder. The redundant wavelet transform (RDWT) provides several advantages over the conventional discrete wavelet transform (DWT). The RDWT overcomes the shift variance of the DWT; moreover, it retains all the phase information of the wavelet coefficients and provides multiple prediction possibilities for ME/MC in the wavelet domain. The general idea of variable size block motion compensation (VSBMC) is to partition a frame so that regions with uniform translational motion are covered by larger blocks while regions containing complicated motion are covered by smaller blocks, leading to an adaptive distribution of motion vectors (MV) across the frame. The research proposed new adaptive partitioning schemes and decision criteria in the RDWT domain that exploit the motion content of a frame more effectively in terms of various block sizes. The research also proposed a selective subpixel accuracy algorithm for the motion vectors using a multiband approach; it reduces the computation required by the conventional subpixel algorithm while maintaining the same accuracy. In addition, the method of overlapped block motion compensation (OBMC) is used to reduce blocking artifacts. Finally, the research extends the proposed VSBMC to 3D video sequences. The experimental results obtained here have shown that VSBMC in the RDWT domain can be a powerful tool for video compression.
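    To make the variable-block-size idea concrete, the following Python sketch recursively splits a block whenever its best motion-compensated match still leaves a large residual. It is only an illustration under assumed choices: plain pixel-domain SAD, a full search in a small window, and a fixed per-pixel threshold; the paper's actual decision criteria and predictions operate on RDWT coefficients.

```python
import numpy as np

def sad(cur, ref):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(cur.astype(np.int32) - ref.astype(np.int32)).sum()

def best_mv(cur_block, ref_frame, y, x, size, search=4):
    """Full-search block matching in a small window; returns (best SAD, mv)."""
    h, w = ref_frame.shape
    best = (np.inf, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry and ry + size <= h and 0 <= rx and rx + size <= w:
                cost = sad(cur_block, ref_frame[ry:ry + size, rx:rx + size])
                if cost < best[0]:
                    best = (cost, (dy, dx))
    return best

def partition(cur_frame, ref_frame, y, x, size, min_size=8, thresh=8.0):
    """Recursively split a block while its best-match residual stays large.

    Returns a list of (y, x, size, mv) leaf blocks. The per-pixel SAD
    threshold is a hypothetical criterion standing in for the paper's
    RDWT-domain decision rules.
    """
    cur = cur_frame[y:y + size, x:x + size]
    cost, mv = best_mv(cur, ref_frame, y, x, size)
    if size > min_size and cost / (size * size) > thresh:
        half = size // 2
        blocks = []
        for oy in (0, half):
            for ox in (0, half):
                blocks += partition(cur_frame, ref_frame, y + oy, x + ox,
                                    half, min_size, thresh)
        return blocks
    return [(y, x, size, mv)]
```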

    Improved Method to Select the Lagrange Multiplier for Rate-Distortion Based Motion Estimation in Video Coding

    The motion estimation (ME) process used in the H.264/AVC reference software is based on minimizing a cost function that involves two terms (distortion and rate) that are properly balanced through a Lagrangian parameter, usually denoted as lambda(motion). In this paper we propose an algorithm to improve the conventional way of estimating lambda(motion) and, consequently, the ME process. First, we show that the conventional estimation of lambda(motion) turns out to be significantly less accurate when ME-compromising events, which cause the ME process to perform poorly, occur. Second, with the aim of improving the coding efficiency in these cases, an efficient algorithm is proposed that allows the encoder to choose between three different values of lambda(motion) for the Inter 16x16 partition size. To be more precise, for this partition size, the proposed algorithm allows the encoder to additionally test lambda(motion) = 0 and an arbitrarily large lambda(motion), which correspond to the minimum-distortion and minimum-rate solutions, respectively. By testing these two extreme values, the algorithm avoids making large ME errors. The experimental results on video segments exhibiting this type of ME-compromising events reveal an average rate reduction of 2.20% for the same coding quality with respect to the JM15.1 reference software of H.264/AVC. The algorithm has also been tested in comparison with a state-of-the-art algorithm called the context-adaptive Lagrange multiplier. Additionally, two illustrative examples of the subjective performance improvement are provided. This work has been partially supported by the National Grant TEC2011-26807 of the Spanish Ministry of Science and Innovation.
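    The cost being minimized is the Lagrangian J = D + lambda(motion) * R, where D is the prediction distortion and R the rate spent on the motion vector. The sketch below illustrates, under assumed inputs, how an encoder could evaluate the same candidate motion vectors under three lambda values, including the two extremes lambda = 0 (minimum distortion) and a very large lambda (minimum rate); the rate model and the numeric lambda values are placeholders, not those of the JM reference software.

```python
import numpy as np

def mv_rate_bits(mv, pred_mv):
    """Rough motion-vector rate model: exp-Golomb-like length of the MV
    difference (a stand-in for the actual entropy coder)."""
    bits = 0
    for d in (mv[0] - pred_mv[0], mv[1] - pred_mv[1]):
        bits += 2 * int(np.floor(np.log2(2 * abs(d) + 1))) + 1
    return bits

def lagrangian_search(candidates, pred_mv, lambdas=(0.0, 92.0, 1e9)):
    """Pick, for each candidate lambda, the MV minimizing J = D + lambda * R.

    `candidates` is a list of (mv, sad) pairs from a prior search.
    lambda = 0 yields the minimum-distortion MV, a very large lambda the
    minimum-rate MV; the middle value plays the role of the conventional
    lambda(motion). All numbers here are illustrative.
    """
    results = {}
    for lam in lambdas:
        chosen, best_j = None, np.inf
        for mv, sad in candidates:
            j = sad + lam * mv_rate_bits(mv, pred_mv)
            if j < best_j:
                chosen, best_j = mv, j
        results[lam] = (chosen, best_j)
    return results

# Example: a distant MV with lower SAD competes with the predicted MV.
print(lagrangian_search([((0, 0), 1200), ((14, -9), 950)], pred_mv=(0, 0)))
```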

    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Toward that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; new prediction schemes. Although the initial goal was to develop one single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. Therefore the consortium decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.

    Motion estimation and signaling techniques for 2D+t scalable video coding

    We describe a fully scalable wavelet-based 2D+t (in-band) video coding architecture. We propose new coding tools specifically designed for this framework, aimed at two goals: reducing the computational complexity at the encoder without sacrificing compression, and improving the coding efficiency, especially at low bitrates. To this end, we focus our attention on motion estimation and motion vector encoding. We propose a fast motion estimation algorithm that works in the wavelet domain and exploits the geometrical properties of the wavelet subbands. We show that its computational complexity grows linearly with the size of the search window, while approaching the performance of a full-search strategy. We extend the proposed motion estimation algorithm to work with blocks of variable sizes, in order to better capture local motion characteristics and thus improve rate-distortion behavior. Given this motion field representation, we propose a motion vector coding algorithm that adaptively scales the motion bit budget according to the target bitrate, improving the coding efficiency at low bitrates. Finally, we show how to optimally scale the motion field when the sequence is decoded at reduced spatial resolution. Experimental results illustrate the advantages of each individual coding tool presented in this paper. Based on these simulations, we define the best configuration of coding parameters and compare the proposed codec with MC-EZBC, a widely used reference codec implementing the t+2D framework.
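    The claim of complexity linear in the search window size can be illustrated with a separable (cross-shaped) search, which scans one axis and then the other instead of the full two-dimensional window. The sketch below is a generic fast search written for illustration only; it is not the subband-geometry algorithm proposed in the paper.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def separable_search(cur_frame, ref_frame, y, x, size, search=8):
    """Cross-shaped block search: scan the horizontal axis, then the vertical
    axis from the best horizontal match. Evaluates O(search) positions instead
    of O(search^2) for full search, i.e. cost linear in the window size."""
    h, w = ref_frame.shape
    block = cur_frame[y:y + size, x:x + size]

    def cost(dy, dx):
        ry, rx = y + dy, x + dx
        if 0 <= ry and ry + size <= h and 0 <= rx and rx + size <= w:
            return sad(block, ref_frame[ry:ry + size, rx:rx + size])
        return np.inf  # candidate falls outside the reference frame

    best_dx = min(range(-search, search + 1), key=lambda dx: cost(0, dx))
    best_dy = min(range(-search, search + 1), key=lambda dy: cost(dy, best_dx))
    return best_dy, best_dx
```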

    Methods for Improving the Tone Mapping for Backward Compatible High Dynamic Range Image and Video Coding

    Backward compatibility for high dynamic range image and video compression is one of the essential requirements in the transition from low dynamic range (LDR) displays to high dynamic range (HDR) displays. In a recent work [1], the problems of tone mapping and HDR video coding were originally fused together in the same mathematical framework, and an optimized solution for tone mapping was derived in terms of the mean square error (MSE) of the logarithm of luminance values. In this paper, we improve this pioneering study in three respects by addressing its three shortcomings. First, the method of [1] works on the logarithms of luminance values, which are not uniform with respect to Human Visual System (HVS) sensitivity. We propose using perceptually uniform luminance values instead for the optimization of the tone mapping curve. Second, the method of [1] does not take the quality of the resulting tone mapped images into account in its formulation, contrary to the main goal of tone mapping research. We include the LDR image quality as a constraint in the optimization problem and develop a generic methodology to balance the trade-off between HDR and LDR image qualities for coding. Third, the method of [1] simply applies a low-pass filter to the generated tone curves for video frames to avoid flickering when the method is adapted to video. We instead include an HVS-based flickering constraint in the optimization and derive a methodology to balance the trade-off between rate-distortion performance and flickering distortion. The superiority of the proposed methodologies is verified with experiments on HDR images and video sequences.
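    As a rough illustration of a content-adaptive tone curve over log-luminance, the sketch below allocates LDR code values to histogram bins in proportion to a power of their probability, so heavily populated luminance ranges keep more precision. The bin count, exponent and normalization are illustrative assumptions, not the closed-form MSE-optimal solution of [1] or the constrained formulations proposed in this paper.

```python
import numpy as np

def tone_curve(hdr_luminance, n_bins=64, ldr_max=255.0, exponent=1.0 / 3.0):
    """Build a monotone piecewise-linear tone curve over log-luminance bins.

    Each bin gets an LDR slope proportional to its probability raised to a
    power, so densely populated luminance ranges receive more LDR code values.
    """
    log_l = np.log10(np.maximum(hdr_luminance, 1e-6))
    hist, edges = np.histogram(log_l, bins=n_bins)
    p = hist / hist.sum()
    slopes = p ** exponent
    slopes = slopes / slopes.sum() * ldr_max   # LDR span allotted to each bin
    curve = np.concatenate(([0.0], np.cumsum(slopes)))  # node value per edge
    return edges, curve

def apply_curve(hdr_luminance, edges, curve):
    """Map HDR luminance to LDR values by interpolating the curve nodes."""
    log_l = np.log10(np.maximum(hdr_luminance, 1e-6))
    return np.interp(log_l, edges, curve)
```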

    Video Coding Technique using MPEG Compression Standards

    Digital video compression technologies have become part of everyday life, in the way visual information is created, communicated and consumed. Many application areas of video compression focus on the problem of optimizing storage space and transmission bandwidth (BW). The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression and is used in the Moving Picture Experts Group (MPEG) encoding standards. Accordingly, several video compression algorithms have been developed to reduce the data volume while providing an acceptable quality standard. In the proposed study, a Matlab Simulink Model (MSM) has been used for video coding/compression. The approach improves error resilience and reduces image distortion.
    KEYWORDS: Compression, Simulink, Video and coding
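    For reference, the 2-D DCT mentioned above can be written as C B C^T, where C is the orthonormal DCT-II basis matrix and B an 8x8 pixel block. The following NumPy sketch shows this directly; it illustrates only the transform itself, not the Simulink model used in the study.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used on 8x8 blocks in MPEG/JPEG."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row has its own normalization
    return c

def block_dct2(block):
    """2-D DCT of one pixel block: C B C^T."""
    c = dct_matrix(block.shape[0])
    return c @ block.astype(np.float64) @ c.T

def block_idct2(coeffs):
    """Inverse 2-D DCT (C is orthonormal, so its transpose inverts it)."""
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c
```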

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
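    The robustness argument can be seen by comparing log-densities at an outlying observation: the Student's t penalty grows logarithmically in the squared deviation, whereas the Gaussian penalty grows quadratically. The sketch below compares the two univariate log-densities; it is an illustration only, not the paper's multivariate t-mixture observation model or HMM.

```python
import math
import numpy as np

def student_t_logpdf(x, mu, sigma, nu):
    """Log-density of a univariate Student's t with location mu, scale sigma,
    and nu degrees of freedom; its heavy tails penalize outliers far less
    than a Gaussian does."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(sigma)
            - (nu + 1) / 2 * np.log1p(z * z / nu))

def gaussian_logpdf(x, mu, sigma):
    """Gaussian log-density for comparison."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    return -0.5 * (z * z + math.log(2 * math.pi)) - math.log(sigma)

# An observation 8 scale units away: the t log-likelihood degrades far more
# gracefully, which is what makes t-mixture observation models robust.
print(gaussian_logpdf(8.0, 0.0, 1.0))        # ~ -32.9
print(student_t_logpdf(8.0, 0.0, 1.0, 3.0))  # ~ -7.2
```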