12 research outputs found

    Improved Sequential MAP estimation of CABAC encoded data with objective adjustment of the complexity/efficiency tradeoff

    This paper presents an efficient MAP estimator for the joint source-channel decoding of data encoded with a context-adaptive binary arithmetic coder (CABAC). The decoding process is compatible with realistic implementations of CABAC in standards like H.264, i.e., it handles adaptive probabilities, context modeling and integer arithmetic coding. Soft decoding is obtained using an improved sequential decoding technique, which allows various tradeoffs between complexity and efficiency to be obtained. The algorithms are simulated in a context reminiscent of H.264. Error detection is realized by exploiting, on the one hand, the properties of the binarization scheme and, on the other, the redundancy left in the code string. As a result, the CABAC compression efficiency is preserved and no additional redundancy is introduced into the bit stream. Simulation results demonstrate the efficiency of the proposed techniques for encoded data sent over AWGN and UMTS-OFDM channels.
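
As a toy illustration of the breadth-first sequential search underlying such soft decoders, the sketch below keeps the M best bit paths by accumulated MAP metric (an M-algorithm over channel LLRs). The function name and the scalar metric are illustrative assumptions, not the paper's decoder: a real CABAC soft decoder would update context-adaptive probabilities and apply syntax checks at every extension step.

```python
import math

def map_m_algorithm(llrs, prior=0.5, M=4):
    """Breadth-first sequential (M-algorithm) MAP search over bit paths.
    llrs: channel log-likelihood ratios log p(y|0)/p(y|1), one per bit.
    Keeps the M best partial paths by accumulated MAP metric."""
    paths = [((), 0.0)]  # (decoded bits so far, path metric)
    for llr in llrs:
        ext = []
        for bits, metric in paths:
            # channel term (+/- llr/2) plus source prior term
            ext.append((bits + (0,), metric + llr / 2 + math.log(prior)))
            ext.append((bits + (1,), metric - llr / 2 + math.log(1 - prior)))
        ext.sort(key=lambda p: p[1], reverse=True)
        paths = ext[:M]  # pruning width M is the complexity/efficiency knob
    return paths[0][0]
```

With M=1 this degenerates to hard-decision decoding; larger M trades complexity for a better chance of recovering the MAP path, which is exactly the tradeoff the paper tunes.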

    Analytical tools for optimizing the error correction performance of arithmetic codes

    In joint source-channel arithmetic coding (JSCAC) schemes, additional redundancy may be introduced into an arithmetic source code in order to make it more robust against transmission errors. The purpose of this work is to provide analytical tools to predict and evaluate the effectiveness of that redundancy. Integer binary arithmetic coding (AC) is modeled by a reduced-state automaton in order to obtain a bit-clock trellis describing the encoding process. Considering AC as a trellis code, distance spectra are then derived. In particular, an algorithm to compute the free distance of an arithmetic code is proposed. The obtained code properties allow upper bounds on both bit error and symbol error probabilities to be computed, and thus provide an objective criterion for analyzing the behavior of JSCAC schemes used on noisy channels. This criterion is then exploited to design efficient error-correcting arithmetic codes. Simulation results confirm the validity of the theoretical error bounds and show that, for equivalent rate and complexity, a simple optimization yields JSCACs that outperform classical tandem schemes at low to medium SNR.
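
The free-distance concept at the heart of this analysis can be illustrated on a linear trellis code, where the free distance reduces to the minimum Hamming weight over nonzero zero-terminated codewords. The brute-force sketch below is a hypothetical stand-in for intuition only (the paper's algorithm targets nonlinear arithmetic-code trellises, where pairwise distances must be searched); it uses the classic rate-1/2 convolutional code with octal generators (7, 5).

```python
from itertools import product

def conv_encode(bits, tail=2):
    """Rate-1/2 convolutional encoder, generators (7,5) octal, zero-terminated."""
    s1 = s2 = 0
    out = []
    for b in list(bits) + [0] * tail:
        out += [b ^ s1 ^ s2, b ^ s2]  # g0 = 111, g1 = 101
        s1, s2 = b, s1
    return out

def free_distance(k=6):
    """Brute-force free distance: minimum Hamming weight over nonzero
    zero-terminated codewords (valid for linear trellis codes only)."""
    best = None
    for bits in product([0, 1], repeat=k):
        if not any(bits):
            continue
        w = sum(conv_encode(bits))
        best = w if best is None else min(best, w)
    return best
```

The input 100000 traces the minimum-weight detour (outputs 11 10 11, weight 5), matching the well-known free distance of this code.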

    Soft decoding and synchronization of arithmetic codes: application to image transmission over noisy channels


    Hamming distance spectrum of DAC codes for equiprobable binary sources

    Distributed Arithmetic Coding (DAC) is an effective technique for implementing Slepian-Wolf coding (SWC). It has been shown that a DAC code partitions the source space into unequal-size codebooks, so that the overall performance of DAC codes depends on the cardinality and structure of these codebooks. The problem of DAC codebook cardinality has been solved by the so-called Codebook Cardinality Spectrum (CCS). This paper extends the previous work on CCS by studying the problem of DAC codebook structure. We define the Hamming Distance Spectrum (HDS) to describe DAC codebook structure and propose a mathematical method to calculate the HDS of DAC codes. The theoretical analyses are verified by experimental results.
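
Once a codebook is given as an explicit set of source sequences, its Hamming distance spectrum is simply the histogram of pairwise distances within the set. The minimal sketch below assumes such an enumerated codebook (an illustrative simplification: real DAC codebooks arise from overlapped coding intervals, and the paper computes the HDS analytically rather than by enumeration).

```python
from itertools import combinations
from collections import Counter

def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def distance_spectrum(codebook):
    """Histogram {distance: count} of pairwise Hamming distances
    within one codebook, i.e., its empirical distance spectrum."""
    return Counter(hamming(a, b) for a, b in combinations(codebook, 2))
```

A codebook whose spectrum has little mass at small distances is easier for the decoder to disambiguate with side information, which is why the HDS matters for DAC performance.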

    Joint coding/decoding techniques and diversity techniques for video and HTML transmission over wireless point/multipoint: a survey

    I. Introduction The concomitant developments of the Internet, which offers its users ever larger and more sophisticated content (from HTML (HyperText Markup Language) files to multimedia applications), and of the wireless systems and handhelds that integrate them, have progressively convinced a fair share of people of the value of always being connected. Still, constraints of heterogeneity, reliability, quality and delay over the transmission channels are generally imposed to fulfill the requirements of these new needs and their corresponding economic goals. This implies various theoretical and practical challenges for today's digital communications community. This paper presents a survey of the techniques existing in the domain of HTML and video stream transmission over erroneous or lossy channels. In particular, the existing techniques for joint source and channel coding and decoding for multimedia or HTML applications are surveyed, as well as the related problems of streaming and downloading files over a mobile IP link. Finally, various diversity techniques that can be considered for such links, from antenna diversity to coding diversity, are presented.
Public enthusiasm for wireless multimedia applications has grown steadily since the development of the Internet. Constraints of transmission-channel heterogeneity, reliability, quality and delay are generally imposed to satisfy these new application needs, which carry significant economic stakes. A number of practical and theoretical challenges remain open for the digital communications research community, and the present survey is set within this framework. This article presents, on the one hand, a state of the art of the main joint coding and decoding techniques developed in the literature for multimedia applications such as downloading and content broadcasting over a mobile IP link. Fundamental notions of digital communications are first recalled, namely source coding, channel coding, Shannon's theorems and their main limitations. The joint coding/decoding techniques presented here essentially concern those developed for source coding schemes involving variable-length codes (VLC), notably Huffman codes, arithmetic codes and universal entropy codes of the Lempel-Ziv (LZ) type. Addressing the problem of transmitting data (HyperText Markup Language (HTML) and video) over a wireless link, the article presents, on the other hand, a panorama of diversity techniques of varying complexity, leading up to the new systems with multiple transmit and receive antennas.

    Iterative joint source channel decoding for H.264 compressed video transmission

    In this thesis, the error-resilient transmission of H.264 compressed video using Context-based Adaptive Binary Arithmetic Coding (CABAC) as the entropy code is examined. The H.264 compressed video is convolutionally encoded and transmitted over an Additive White Gaussian Noise (AWGN) channel. Two iterative joint source-channel decoding schemes are proposed, in which slice candidates that failed semantic verification are exploited. The first proposed scheme uses the soft bit values produced by a soft-input soft-output channel decoder to generate a list of slice candidates for each slice in the compressed video sequence. These slice candidates are semantically verified to choose the best one. A new semantic checking method is proposed, which uses information from slice candidates that failed semantic verification to virtually check the current slice candidate. The second proposed scheme builds on the first. It also uses slice candidates that failed semantic verification, but uses them to modify the soft bit values at the source decoder before they are fed back into the channel decoder for the next iteration. Simulation results show that both schemes offer improvements in subjective quality and in objective quality, using PSNR and BER as measures. Keywords: video transmission, H.264, semantics, slice candidate, joint source-channel decoding, error resilience.
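
A common way to build such a candidate list from soft channel outputs is to flip the least reliable bit positions first. The sketch below is a generic, ordered-statistics-style generator under that assumption, not the thesis's exact procedure; the semantic verification step that ranks the candidates is omitted.

```python
from itertools import combinations

def slice_candidates(hard_bits, reliabilities, max_flips=2):
    """Generate candidate bit patterns for one slice by flipping the
    least reliable positions first. hard_bits: hard decisions (0/1);
    reliabilities: per-bit confidence (e.g., |LLR|), lower = less reliable."""
    order = sorted(range(len(hard_bits)), key=lambda i: reliabilities[i])
    cands = [list(hard_bits)]  # candidate 0: the unmodified hard decision
    window = order[:max_flips + 2]  # only consider the least reliable bits
    for r in range(1, max_flips + 1):
        for pos in combinations(window, r):
            c = list(hard_bits)
            for p in pos:
                c[p] ^= 1
            cands.append(c)
    return cands
```

Each candidate would then be run through the slice semantic checker, and the best surviving candidate (or, in the second scheme, the failures themselves) feeds the next decoding iteration.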

    Error Correction Using Natural Language Processing

    Data reliability is very important in storage systems. To increase data reliability, there are techniques based on error-correcting codes (ECCs). These techniques introduce redundant bits into the stored data to enable error correction, and ECC-based error correction has been studied extensively in the past. In this thesis, a new error correction technique based on the redundant information already present within the data is studied. To increase data reliability, an error correction technique based on the characteristics of the text that the data represents is discussed. The focus is on correcting data that represents English text, using the parts-of-speech property associated with English text. Three approaches have been implemented: a pure count-based approach, a two-step HMM-based approach, and a bit-error-minimization-based approach. The two-step HMM approach outperformed the other two. Using this newly proposed technique alongside existing error-correcting techniques would further increase data reliability and complement their performance.
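
The HMM machinery behind the parts-of-speech approaches is standard Viterbi decoding over a tag model. The sketch below runs Viterbi on a two-tag toy model; all probabilities and tag names are invented for illustration, and the thesis's two-step variant adds a second pass on top of this basic decode.

```python
import math

def viterbi(obs, states, start, trans, emit):
    """Most likely hidden tag sequence for the observed word sequence."""
    logp = math.log
    V = [{s: logp(start[s]) + logp(emit[s].get(obs[0], 1e-9)) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prev = max(states, key=lambda q: V[t - 1][q] + logp(trans[q][s]))
            V[t][s] = V[t - 1][prev] + logp(trans[prev][s]) \
                      + logp(emit[s].get(obs[t], 1e-9))
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-tag model (all numbers invented for illustration).
STATES = ("N", "V")
START = {"N": 0.6, "V": 0.4}
TRANS = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.6, "V": 0.4}}
EMIT = {"N": {"dog": 0.9, "runs": 0.1}, "V": {"dog": 0.1, "runs": 0.9}}
```

In the error-correction setting, candidate corrections of a corrupted word would be scored by how well their tags fit the surrounding Viterbi path.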

    MAP Joint Source-Channel Arithmetic Decoding for Compressed Video

    In order to achieve robust video transmission over error-prone telecommunication channels, several mechanisms have been introduced. These mechanisms try to detect, correct or conceal the errors in the received video stream. In this thesis, the performance of the video codec is improved in terms of error rates without increasing overhead in terms of data bit rate. This is done by exploiting the residual syntactic/semantic redundancy inside compressed video, optimizing the configuration of the state-of-the-art entropy coding, i.e., binary arithmetic coding, and optimizing the quantization of the channel output. The thesis is divided into four phases. In the first phase, a breadth-first suboptimal sequential maximum a posteriori (MAP) decoder is employed for joint source-channel arithmetic decoding of H.264 symbols. The proposed decoder uses not only the intentional redundancy inserted via a forbidden symbol (FS) but also exploits residual redundancy via a syntax checker. In contrast to previous methods, this is done as each channel bit is decoded. Simulations using intra prediction modes show improvements in error rates, e.g., a reduction of the syntax element error rate by an order of magnitude at a channel SNR of 7.33 dB. The cost of this improvement is additional computational complexity spent on syntax checking. In the second phase, the configuration of the FS in the symbol set is studied. The delay probability function, i.e., the probability distribution of the number of bits required to detect an error, is calculated for various FS configurations. The probability of missed error detection is calculated as a figure of merit for optimizing the FS configuration. The simulation results show the effectiveness of the proposed figure of merit, and support the configuration in which the FS lies entirely between the other information-carrying symbols as the best. In the third phase, a new method for estimating the a priori probability of particular syntax elements is proposed.
This estimation is based on the interdependency among previously decoded syntax elements, and each estimate is categorized as either reliable or unreliable. The decoder uses this prior information when it is reliable; otherwise, the MAP decoder assumes the syntax elements are equiprobable and in turn falls back to maximum likelihood (ML) decoding. Reliability detection is carried out using a threshold on the local entropy of the syntax elements in the neighboring macroblocks. In the last phase, a new measure for assessing the performance of the channel quantizer is proposed. This measure is based on the statistics of the rank of the true candidate in the sorted candidate list of the MAP decoder. Simulation results show that a quantizer designed according to the proposed measure is superior to quantizers designed for maximum mutual information or minimum mean square error.
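
The forbidden-symbol idea underlying the first two phases can be sketched with a floating-point arithmetic coder that reserves a slice of each coding interval for a symbol that is never encoded: a decoder whose value lands in that slice has provably seen a channel error. This toy version is illustrative only (real CABAC uses integer arithmetic, context adaptation and renormalization) and, as an assumption, places the FS at the top of the interval rather than between the information-carrying symbols.

```python
def fs_encode(bits, p0=0.5, eps=0.1):
    """Toy arithmetic encoder: a fraction eps of every interval is
    reserved for the forbidden symbol, which is never encoded."""
    lo, hi = 0.0, 1.0
    for b in bits:
        w = hi - lo
        if b == 0:
            hi = lo + w * p0 * (1 - eps)
        else:
            lo = lo + w * p0 * (1 - eps)
            hi = lo + w * (1 - p0) * (1 - eps)
    return (lo + hi) / 2  # any value in [lo, hi) identifies the sequence

def fs_decode(x, n, p0=0.5, eps=0.1):
    """Decode n bits from code value x; landing in the forbidden
    slice flags a detected error (returns partial bits, True)."""
    lo, hi = 0.0, 1.0
    out = []
    for _ in range(n):
        w = hi - lo
        b0_hi = lo + w * p0 * (1 - eps)
        b1_hi = b0_hi + w * (1 - p0) * (1 - eps)
        if x < b0_hi:
            out.append(0); hi = b0_hi
        elif x < b1_hi:
            out.append(1); lo, hi = b0_hi, b1_hi
        else:
            return out, True  # forbidden region: error detected
    return out, False
```

Larger eps spends more rate on the FS but detects errors after fewer bits, which is the delay-versus-redundancy tradeoff the second phase quantifies.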