
    Macroblock level rate and distortion estimation applied to the computation of the Lagrange multiplier in H.264 compression

    The optimal value of the Lagrange multiplier, a trade-off factor between the conveyed rate and the distortion measured at signal reconstruction, has been a fundamental problem of rate-distortion theory and of video compression in particular. The H.264 standard does not specify how to determine the optimal combination of quantization parameter (QP) values and encoding choices (motion vectors, mode decision). To date, the encoding process still relies on a static value of the Lagrange multiplier, with an exponential dependence on QP, as adopted by the scientific community. However, this static value cannot accommodate the diversity of video sequences, and determining its optimal value remains a challenge for current research. In this thesis, we propose a novel algorithm that dynamically adapts the Lagrange multiplier to the video input by using the distribution of the transformed residuals at the macroblock level, which is expected to improve compression performance in the rate-distortion space. We apply several models to the transformed residuals (Laplace, Gaussian, generic probability density function) at the macroblock level to estimate the rate and distortion, and study how well they fit the actual values. We then analyze the benefits and drawbacks of a few simple models (Laplace, and a mixture of Laplace and Gaussian) from the standpoint of the achieved compression gain versus visual improvement in connection with the H.264 standard. Rather than computing the Lagrange multiplier from a model applied to the whole frame, as proposed in the state of the art, we compute it from models applied at the macroblock level. The new algorithm estimates each macroblock's rate and distortion from its transformed residuals and then combines the contribution of each macroblock to compute the frame's Lagrange multiplier. Experiments on various types of video showed that the distortion calculated at the macroblock level approaches the actual distortion delivered by the reference software for most sequences tested, although a reliable rate model is still lacking, especially at low bit rates. Nevertheless, the results obtained from compressing various video sequences show that the proposed method performs significantly better than the H.264 Joint Model and slightly better than state-of-the-art methods.
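    The mechanism described above can be illustrated with a small sketch: estimate a Laplacian scale parameter from each macroblock's transformed residuals, derive a model rate and distortion for a uniform quantizer, sum the per-macroblock contributions, and take the frame-level Lagrange multiplier as lambda = -dD/dR. The function names, the finite-difference evaluation of the derivative, and the mid-tread quantizer are illustrative assumptions and do not reproduce the thesis's exact estimators; the static Joint Model formula appears only as a comparison point.

```python
import numpy as np

def laplace_scale(residuals):
    # ML estimate of the Laplacian scale b from a macroblock's transformed residuals
    return float(np.mean(np.abs(residuals)))

def model_rate_distortion(b, q, span=64):
    """Model rate (bits/coefficient) and distortion (MSE/coefficient) for a
    zero-mean Laplace(b) source quantized with a uniform mid-tread step q."""
    # Rate: entropy of the quantizer output, bin probabilities from the Laplace CDF
    edges = (np.arange(-span, span + 1) - 0.5) * q
    t = np.exp(-np.abs(edges) / b)
    cdf = np.where(edges >= 0, 1.0 - 0.5 * t, 0.5 * t)
    p = np.diff(cdf)
    p = p[p > 1e-12]
    rate = float(-np.sum(p * np.log2(p)))
    # Distortion: E[(x - Q(x))^2] by dense numerical integration of the pdf
    x = np.linspace(-span * q, span * q, 20001)
    pdf = np.exp(-np.abs(x) / b) / (2.0 * b)
    err = (x - np.round(x / q) * q) ** 2
    dist = float(np.sum(err * pdf) * (x[1] - x[0]))
    return rate, dist

def frame_lambda(mb_residuals, q, eps=0.01):
    """Frame-level Lagrange multiplier lambda = -dD/dR, where the aggregate rate
    and distortion sum the per-macroblock Laplacian model estimates and the
    derivative is taken by finite differences over the quantization step."""
    r_lo = d_lo = r_hi = d_hi = 0.0
    for res in mb_residuals:
        b = max(laplace_scale(res), 1e-6)
        r1, d1 = model_rate_distortion(b, q * (1 - eps))
        r2, d2 = model_rate_distortion(b, q * (1 + eps))
        r_lo, d_lo = r_lo + r1, d_lo + d1
        r_hi, d_hi = r_hi + r2, d_hi + d2
    return -(d_hi - d_lo) / (r_hi - r_lo)

# Example: three macroblocks' worth of synthetic Laplacian residuals
rng = np.random.default_rng(0)
mbs = [rng.laplace(scale=s, size=256) for s in (2.0, 5.0, 12.0)]
print(frame_lambda(mbs, q=10.0))

# The static multiplier used by the H.264 reference software, for comparison:
# lambda_mode = 0.85 * 2 ** ((QP - 12) / 3)
```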

    Effective network grid synthesis and optimization for high performance very large scale integration system design

    Degree system: new; Ministry of Education report number: Kou 2642; Degree type: Doctor of Engineering; Date conferred: 2008/3/15; Waseda University degree record number: Shin 480

    Novi algoritam za kompresiju seizmičkih podataka velike amplitudske rezolucije (A novel algorithm for compression of seismic data with high amplitude resolution)

    Renewable sources cannot yet meet the energy demand of a growing global market. Therefore, oil & gas are expected to remain a substantial source of energy in the coming years. To find new oil & gas deposits that can satisfy growing global energy demand, significant effort is continually invested in increasing the efficiency of seismic surveys. It is commonly considered that, in the initial phase of exploration and production of new fields, high-resolution, high-quality images of the subsurface are of great importance. Within the seismic data processing chain, efficient management and delivery of the large data sets produced by the industry during seismic surveys becomes extremely important in order to facilitate further seismic data processing and interpretation. In this respect, efficiency relies to a large extent on the compression scheme, which is often required to enable faster transfer of and access to data, as well as efficient data storage. Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in data volume produced by seismic surveys, this work explores a 32 bits per pixel (b/p) extension of the HEVC codec for compression of seismic data. It is proposed to reassemble seismic slices into a format that corresponds to a video signal, benefiting from the coding gain achieved by the HEVC inter mode in addition to the possible advantages of the (still-image) HEVC intra mode. To this end, this work modifies almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in the optimization of coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding. In addition, optimized block selection, reduced intra prediction modes, and flexible motion estimation are tested to adapt to the structure of seismic data. Even though the codec with the proposed modifications goes beyond the standardized HEVC, it still maintains a generic HEVC structure and is developed under the general HEVC framework. There is no similar work in the field of seismic data compression that uses HEVC as the base codec. Thus, a specific codec design has been tailored which, when compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the proposed configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, an optimized encoder is proposed in this work. It reduces encoding time by 67.17% for the All-I configuration on the trace image dataset, and by 67.39% for All-I, 97.96% for the P2 configuration, and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses. As a side contribution of this work, HEVC is analyzed across all of its functional units, so that the presented work itself can serve as an overview of the methods incorporated into the standard.
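    Two of the preprocessing steps mentioned above, reassembling seismic slices into a video-like frame sequence and evaluating PSNR at 32 b/p, can be sketched as follows. This is a minimal illustration under assumed conventions (a 3D volume sliced along one axis, a simple global min-max mapping onto the 32-bit integer range, and an identity stand-in for the encode/decode round trip); the function names are hypothetical and the actual codec modifications described in the work are not reproduced.

```python
import numpy as np

def volume_to_frames(volume, axis=0):
    """Reorder a 3D seismic volume (e.g. inline x crossline x time) into a
    sequence of 2D 'frames' along the chosen axis, so consecutive slices can
    be fed to a video codec and exploit inter-frame prediction."""
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]

def to_uint32(frames):
    """Map floating-point amplitudes onto the 32 b/p integer range expected by
    a high bit-depth codec extension (simple global min-max scaling)."""
    lo = min(f.min() for f in frames)
    hi = max(f.max() for f in frames)
    scale = (2**32 - 1) / (hi - lo) if hi > lo else 1.0
    return [np.round((f - lo) * scale).astype(np.uint32) for f in frames], lo, scale

def psnr_32bpp(original, reconstructed, peak=2**32 - 1):
    """PSNR for 32 b/p samples: 10 * log10(peak^2 / MSE)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example: treat time slices of a synthetic volume as a 'video' and check PSNR
# after a stand-in encode/decode round trip (identity here, as a placeholder).
volume = np.random.default_rng(0).standard_normal((16, 128, 128)).astype(np.float32)
frames, offset, scale = to_uint32(volume_to_frames(volume, axis=0))
decoded = frames  # placeholder for a 32 b/p HEVC-style encode/decode
print(psnr_32bpp(frames[0], decoded[0]))
```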