8 research outputs found

    Frame Interpolation for Cloud-Based Mobile Video Streaming

    © 2016 IEEE. Cloud-based High Definition (HD) video streaming is growing rapidly in popularity. On one hand, end users and large storage providers need to store vast amounts of data across different locations and servers; on the other hand, providing reliable connectivity to network users is an increasing challenge for network service providers. Many studies have examined the Quality of Experience (QoE) of cloud-based video streaming services such as YouTube. Packet losses and bit errors are common in transmission networks and degrade user-perceived quality of cloud-based media services. To conceal them, Error Concealment (EC) techniques are usually applied at the decoder/receiver side to estimate the lost information. This paper proposes a time-efficient and quality-oriented EC method. The method targets H.265/HEVC intra-encoded videos and estimates whole lost intra-frames, with emphasis on recovering the Motion Vectors (MVs) of a lost frame in real time. To speed up the search for the lost MVs, a larger block size and a parallel search are both employed. The simulation results show that the proposed method outperforms the traditional Block Matching Algorithm (BMA) by approximately 2.5 dB and Frame Copy (FC) by up to 12 dB at packet loss rates of 1%, 3%, and 5% with different Quantization Parameters (QPs), while reducing computation time relative to the BMA by approximately 1788 seconds.
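    The abstract above outlines block-matching recovery of the motion vectors of a whole lost frame, sped up with a larger block size and a parallel search. The Python sketch below is only a generic illustration of that idea, not the paper's algorithm: it performs bilateral block matching between the previous and next decoded frames and conceals each block in parallel. All function names, the 32-pixel block size, and the search range are assumptions.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def _sad(a, b):
        # Sum of absolute differences between two equal-sized luma blocks.
        return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

    def _conceal_block(prev_f, next_f, y, x, block, search):
        # Bilateral matching: assume linear motion through the lost frame, so
        # the block at (y, x) came from (y - dy, x - dx) in the previous frame
        # and moves to (y + dy, x + dx) in the next one.  Pick the (dy, dx)
        # with the smallest SAD between those two blocks and average them.
        h, w = prev_f.shape
        best_cost, best = float("inf"), prev_f[y:y + block, x:x + block]
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                py, px = y - dy, x - dx
                ny, nx = y + dy, x + dx
                if (min(py, px, ny, nx) < 0 or py + block > h or px + block > w
                        or ny + block > h or nx + block > w):
                    continue
                p = prev_f[py:py + block, px:px + block]
                n = next_f[ny:ny + block, nx:nx + block]
                cost = _sad(p, n)
                if cost < best_cost:
                    best_cost = cost
                    best = (p.astype(np.uint16) + n) // 2   # average both references
        return (y, x), best.astype(prev_f.dtype)

    def conceal_lost_frame(prev_f, next_f, block=32, search=8, workers=4):
        # Conceal a whole lost frame: estimate MVs for every (block x block)
        # region in parallel and paste the motion-compensated blocks together.
        h, w = prev_f.shape
        grid = [(y, x) for y in range(0, h - block + 1, block)
                       for x in range(0, w - block + 1, block)]
        out = prev_f.copy()                      # frame copy as fallback at borders
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pieces = list(pool.map(
                lambda p: _conceal_block(prev_f, next_f, p[0], p[1], block, search),
                grid))
        for (y, x), blk in pieces:
            out[y:y + block, x:x + block] = blk
        return out

    For a single luma plane, conceal_lost_frame(prev, next) would return the interpolated frame as a NumPy array of the same shape; the paper's HEVC-specific details are not reflected here.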

    Iterative joint source channel decoding for H.264 compressed video transmission

    In this thesis, the error-resilient transmission of H.264 compressed video using Context-based Adaptive Binary Arithmetic Coding (CABAC) as the entropy code is examined. The H.264 compressed video is convolutionally encoded and transmitted over an Additive White Gaussian Noise (AWGN) channel. Two iterative joint source-channel decoding schemes are proposed, in which slice candidates that failed semantic verification are exploited. The first scheme uses soft bit values produced by a soft-input soft-output channel decoder to generate a list of slice candidates for each slice in the compressed video sequence; these candidates are semantically verified to choose the best one. A new semantic checking method is proposed that uses information from slice candidates that failed semantic verification to virtually check the current slice candidate. The second scheme builds on the first: it also uses slice candidates that failed semantic verification, but uses them to modify the soft bit values at the source decoder before they are fed back into the channel decoder for the next iteration. Simulation results show that both schemes improve subjective quality as well as objective quality measured by PSNR and BER. Keywords: Video transmission, H.264, semantics, slice candidate, joint source-channel decoding, error resilience
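    As a rough, hypothetical sketch of the decoding loop described above (not the thesis implementation), the Python fragment below runs a soft-input soft-output channel-decoding pass, walks a ranked list of slice candidates built from the soft bit values, returns the first candidate that passes the semantic check, and uses rejected candidates to weaken the confidence of the corresponding bits before the next iteration, loosely mirroring the second scheme's soft-value feedback. The callables channel_decode, list_candidates, and passes_semantics stand in for the convolutional/CABAC components and are assumptions, not real APIs.

    import numpy as np

    def iterative_jscd(llrs, channel_decode, list_candidates, passes_semantics,
                       damping=3.0, max_iters=4):
        # llrs: a priori log-likelihood ratios of the received slice bits.
        # channel_decode(llrs) -> refined soft bit values (SISO decoder pass).
        # list_candidates(soft) -> iterable of (hard_bits, flipped_positions),
        #     ordered from most to least likely slice candidate.
        # passes_semantics(hard_bits, failed) -> True if the candidate decodes
        #     to a semantically valid slice, optionally consulting the list of
        #     previously failed candidates.
        llrs = np.asarray(llrs, dtype=float)
        failed = []                              # candidates rejected so far
        for _ in range(max_iters):
            soft = np.asarray(channel_decode(llrs), dtype=float)
            for hard_bits, flipped in list_candidates(soft):
                if passes_semantics(hard_bits, failed):
                    return hard_bits             # best semantically valid slice
                failed.append(hard_bits)
                # Feedback from a failed candidate: reduce the reliability of
                # the bits it flipped so the next channel-decoding pass can
                # explore alternative hypotheses.
                soft[np.asarray(flipped, dtype=int)] /= damping
            llrs = soft                          # feed modified soft values back
        return None                              # no semantically valid slice found

    The constant damping factor is only a placeholder for whatever soft-value modification the thesis actually applies at the source decoder.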