
    A Cauchy-density-based rate controller for H.264/AVC in low-delay environments

    The accuracy of the Cauchy probability density function for modeling the distribution of discrete cosine transform coefficients has already been demonstrated at the frame layer of the rate control subsystem of a hybrid video coder. Nevertheless, in some specific applications operating in real-time low-delay environments, a basic unit layer is recommended in order to provide a good trade-off between quality and delay control. In this paper, a novel basic unit bit allocation scheme for H.264/AVC is proposed, based on a simplified Cauchy probability density function source model. The experimental results show that the proposed algorithm improves the average peak signal-to-noise ratio by 0.28 and 0.35 dB with respect to two well-known rate control schemes, while maintaining a similar peak signal-to-noise ratio standard deviation and buffer occupancy evolution.
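
    As a rough illustration of how a Cauchy source model can drive bit allocation, the sketch below estimates the rate of a uniformly quantized zero-location Cauchy source from its entropy and searches for the smallest quantization step whose predicted rate fits a bit budget. This is a minimal Python sketch, not the paper's algorithm; the function names, the scale estimator (for a zero-location Cauchy density, the median of |x| equals the scale parameter mu) and the bin truncation are illustrative assumptions.

        import numpy as np

        def cauchy_cdf(x, mu):
            # CDF of the zero-location Cauchy density p(x) = (1/pi) * mu / (mu^2 + x^2)
            return 0.5 + np.arctan(x / mu) / np.pi

        def rate_per_coeff(q, mu, kmax=4096):
            # Entropy (bits/coefficient) after uniform quantization with step q:
            # bin k collects the probability mass between (k - 0.5) q and (k + 0.5) q.
            k = np.arange(-kmax, kmax + 1)
            p = cauchy_cdf((k + 0.5) * q, mu) - cauchy_cdf((k - 0.5) * q, mu)
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def pick_qstep(bit_budget, n_coeffs, dct_sample, qsteps):
            # Hypothetical allocation step: estimate mu from a sample of DCT
            # coefficients, then return the smallest quantization step whose
            # predicted rate for n_coeffs coefficients fits the bit budget.
            mu = np.median(np.abs(dct_sample))
            for q in sorted(qsteps):
                if n_coeffs * rate_per_coeff(q, mu) <= bit_budget:
                    return q
            return max(qsteps)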

    Cauchy-Density-Based Basic Unit Layer Rate Controller for H.264/AVC

    The rate control problem has been studied extensively in parallel with the development of the different video coding standards. Bit allocation via Cauchy-density-based rate-distortion (R-D) modeling of the discrete cosine transform (DCT) coefficients has proved to be one of the most accurate solutions at the picture level. Nevertheless, in some specific applications operating in real-time low-delay environments, a basic unit (BU) layer is recommended in order to provide a good trade-off between picture quality and delay control. In this paper, a novel BU bit allocation scheme for H.264/AVC is proposed, based on a simplified Cauchy probability density function (PDF) source model. The experimental results are twofold: 1) the proposed rate control algorithm (RCA) achieves an average PSNR improvement of 0.28 dB with respect to a well-known BU layer RCA, while maintaining a similar buffer occupancy evolution; and 2) it notably reduces the buffer occupancy fluctuations with respect to a well-known picture layer RCA, while maintaining similar quality levels.
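
    The BU-layer allocation described above can be pictured as a greedy split of the frame budget: each basic unit receives a share of the remaining bits weighted by its estimated complexity, and the pool shrinks as units are encoded. A minimal sketch, assuming a generic per-BU complexity measure (e.g., a Cauchy scale estimate or a MAD prediction); none of the names come from the paper.

        def allocate_bu_bits(frame_budget, complexities):
            # Greedy per-basic-unit bit allocation: BU i gets a fraction of the
            # remaining budget proportional to its complexity among the BUs
            # still to be coded. In a real encoder, 'remaining' would be
            # decremented by the bits actually produced, not by the target.
            remaining = float(frame_budget)
            targets = []
            for i, c in enumerate(complexities):
                weight = c / sum(complexities[i:])
                target = remaining * weight
                targets.append(target)
                remaining -= target
            return targets

        # Example: a 3000-bit frame split over four BUs of unequal complexity.
        print(allocate_bu_bits(3000, [4.0, 1.0, 2.0, 3.0]))
        # -> [1200.0, 300.0, 600.0, 900.0]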

    Graded quantization for multiple description coding of compressive measurements

    Compressed sensing (CS) is an emerging paradigm for the acquisition of compressed representations of a sparse signal. Its low complexity is appealing for resource-constrained scenarios like sensor networks. However, such scenarios are often coupled with unreliable communication channels, and providing robust transmission of the acquired data to a receiver is an issue. Multiple description coding (MDC) effectively combats channel losses for systems without feedback, thus raising interest in developing MDC methods explicitly designed for the CS framework and exploiting its properties. We propose a method called Graded Quantization (CS-GQ) that leverages the democratic property of compressive measurements to effectively implement MDC, and we provide methods to optimize its performance. A novel decoding algorithm based on the alternating directions method of multipliers is derived to reconstruct signals from a limited number of received descriptions. Simulations are performed to assess the performance of CS-GQ against other methods in the presence of packet losses. The proposed method is successful at providing robust coding of CS measurements and outperforms other schemes for the considered test metrics.
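
    To make the "graded" idea concrete: each description quantizes one half of the measurement vector finely and the other half coarsely, in complementary order, so a single description already covers every measurement, while two descriptions let the decoder keep the finer copy of each entry. The Python sketch below shows only that packaging; the quantizer design, rate optimization, and ADMM-based reconstruction of the paper are not reproduced, and all names are illustrative.

        import numpy as np

        def uniform_quantize(x, bits, amax):
            # Uniform quantizer on [-amax, amax] with 2**bits reconstruction levels.
            step = 2.0 * amax / (2 ** bits)
            return np.clip(np.round(x / step) * step, -amax, amax)

        def cs_gq_descriptions(y, fine_bits=8, coarse_bits=4):
            # Two complementary descriptions of the CS measurement vector y:
            # description 1 is fine on the first half and coarse on the second;
            # description 2 is the mirror image.
            amax = np.max(np.abs(y))
            h = len(y) // 2
            d1 = np.concatenate([uniform_quantize(y[:h], fine_bits, amax),
                                 uniform_quantize(y[h:], coarse_bits, amax)])
            d2 = np.concatenate([uniform_quantize(y[:h], coarse_bits, amax),
                                 uniform_quantize(y[h:], fine_bits, amax)])
            return d1, d2

        def central_decode_input(d1, d2):
            # When both descriptions arrive, keep the finely quantized half of each.
            h = len(d1) // 2
            return np.concatenate([d1[:h], d2[h:]])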

    Averaging versus Chaos in Turbulent Transport?

    In this paper we analyze the transport of passive tracers by deterministic stationary incompressible flows which can be decomposed over an infinite number of spatial scales without separation between them. It appears that a low-order dynamical system related to local Peclet numbers can be extracted from these flows, and it controls their transport properties. Its analysis shows that these flows are strongly self-averaging and super-diffusive: the time τ(r) needed for any finite number of initially close passive tracers to separate to a distance r is almost surely anomalously short (τ(r) ∼ r^(2−ν), with ν > 0). This strong self-averaging property is such that the dissipative power of the flow compensates its convective power at every scale. However, as the circulation in the eddies increases, the transport behavior of the flow may (discontinuously) bifurcate and become ruled by deterministic chaos: the self-averaging property collapses and advection dominates dissipation. When the flow is anisotropic, a new formula describing turbulent conductivity is identified.Comment: Presented at Oberwolfach (October 2002), CIRM (March 2003), Lisbon (XIVth International Congress on Mathematical Physics, July 2003). Submitted in October 2002; to appear in Communications in Mathematical Physics. 45 pages, 7 figures
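
    For orientation, the exponent in the abstract is best read against the normal diffusive benchmark. The LaTeX below simply restates the two scalings being contrasted (ν > 0 is the anomaly exponent; nothing here goes beyond the abstract):

        \tau(r) \sim r^{2} \quad \text{(normal diffusion)},
        \qquad
        \tau(r) \sim r^{2-\nu}, \ \nu > 0 \quad \text{(super-diffusive, self-averaging regime)}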

    Application of a Bi-Geometric Transparent Composite Model to HEVC: Residual Data Modelling and Rate Control

    Among the various transforms, the discrete cosine transform (DCT) is the most widely used in multimedia compression technologies across the different image and video coding standards. During the development of image and video compression, much interest has been devoted to understanding the statistical distribution of DCT coefficients, which is useful for designing compression techniques such as quantization, entropy coding and rate control. Recently, a bi-geometric transparent composite model (BGTCM) has been developed to model the distribution of DCT coefficients with both simplicity and accuracy. It has been reported that for DCT coefficients obtained from original images, as used in image coding, a transparent composite model (TCM) provides better modelling than the Laplacian. In video compression, such as H.264/AVC, the DCT is performed on residual images obtained after prediction, with different transform sizes. Moreover, in High Efficiency Video Coding (HEVC), the newest video coding standard, besides the DCT as the main transform tool, the discrete sine transform (DST) and transform skip (TS) techniques may also be applied to residual data in small blocks. As such, the distribution of transformed residual data differs from that of transformed original image data. In this thesis, the distribution of coefficients, including those from all DCT, DST and TS blocks, is analysed based on the BGTCM. Specifically, the distribution of all coefficients in a whole frame is examined first. Second, in HEVC the entropy coding is built around a new encoding concept, the coefficient group (CG) of size 4×4, where quantized coefficients are encoded with context models based on their scan indices within each CG. To mirror the encoding process, coefficients at the same scan index across different CGs are grouped together to form a set, and the distribution of coefficients in each set is analysed. Based on our results, the BGTCM outperforms other widely used distributions, such as the Laplacian and Cauchy distributions, in both χ² and KL-divergence testing. Furthermore, unlike approaches based on the Laplacian and Cauchy distributions, the BGTCM can be used to derive rate-quantization (R-Q) and distortion-quantization (D-Q) models without approximate expressions. R-Q and D-Q models based on the BGTCM reflect the actual distribution of the coefficients, which is important in rate control. In video coding, rate control uses these two models to generate a suitable quantization parameter without multi-pass encoding, so as to maintain coding efficiency while producing the bits required to satisfy the rate constraint. In this thesis, rate control in HEVC is revised based on the BGTCM, yielding a considerable increase in coding efficiency and a decrease in rate fluctuation, measured as the rate variance among frames, under a constant-bit-rate requirement.
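
    The coefficient-grouping step described above is easy to picture in code: quantized levels from every 4×4 coefficient group are bucketed by their scan position, producing sixteen sets whose empirical distributions can then be fitted with candidate models. A minimal Python sketch, with the caveat that a raster scan stands in for HEVC's actual diagonal scan order:

        import numpy as np

        def group_by_scan_index(cg_blocks):
            # cg_blocks: iterable of 4x4 arrays of quantized levels (one per CG).
            # Returns 16 sets, one per scan position, pooling levels across CGs.
            sets = [[] for _ in range(16)]
            for block in cg_blocks:
                flat = np.asarray(block).reshape(16)  # raster order as a stand-in
                for idx, level in enumerate(flat):
                    sets[idx].append(level)
            return [np.asarray(s) for s in sets]

        # Each returned set can then be fitted with a candidate model (BGTCM,
        # Laplacian, Cauchy) and scored, e.g., by a chi-squared or KL-divergence test.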