7 research outputs found

    Distributed coding using punctured quasi-arithmetic codes for memory and memoryless sources

    This correspondence considers the use of punctured quasi-arithmetic (QA) codes for the Slepian–Wolf problem. These entropy codes are defined by finite state machines for memoryless and first-order memory sources. Puncturing an entropy-coded bit-stream leads to an ambiguity at the decoder side; the decoder uses a correlated version of the original message to remove this ambiguity. A complete distributed source coding (DSC) scheme based on QA encoding with side information at the decoder is presented, together with iterative structures based on QA codes. The proposed schemes are adapted to memoryless and first-order memory sources. Simulation results show that, for short sequences, the proposed schemes outperform well-known DSC solutions based on channel codes in terms of decoding performance.
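    As a toy illustration of the idea, not the paper's QA codec: puncturing removes bits from a coded stream, and a decoder holding a correlated side-information sequence can resolve the resulting ambiguity by picking the candidate reconstruction closest in Hamming distance to that side information. All names and data below are hypothetical.

    ```python
    from itertools import product

    def puncture(bits, punctured):
        """Drop the bits at the punctured positions."""
        return [b for i, b in enumerate(bits) if i not in punctured]

    def decode_with_side_info(received, punctured, length, side_info):
        """Brute-force sketch: enumerate every fill-in of the punctured
        positions and return the candidate closest (in Hamming distance)
        to the correlated side information."""
        best, best_dist = None, length + 1
        for fill in product([0, 1], repeat=len(punctured)):
            it, fills = iter(received), iter(fill)
            cand = [next(fills) if i in punctured else next(it)
                    for i in range(length)]
            dist = sum(a != b for a, b in zip(cand, side_info))
            if dist < best_dist:
                best, best_dist = cand, dist
        return best

    original = [1, 0, 1, 1, 0, 0, 1, 0]
    punctured = {2, 5}                       # positions removed before transmission
    rx = puncture(original, punctured)
    side = [1, 0, 1, 0, 0, 0, 1, 0]          # correlated copy (one bit differs)
    print(decode_with_side_info(rx, punctured, len(original), side))
    # → [1, 0, 1, 1, 0, 0, 1, 0]  (the original is recovered)
    ```

    Real schemes replace this exhaustive search with soft iterative decoding, but the role of the side information is the same.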

    Hamming distance spectrum of DAC codes for equiprobable binary sources

    Distributed Arithmetic Coding (DAC) is an effective technique for implementing Slepian–Wolf coding (SWC). It has been shown that a DAC code partitions the source space into unequal-size codebooks, so the overall performance of DAC codes depends on the cardinality and structure of these codebooks. The problem of DAC codebook cardinality has been solved by the so-called Codebook Cardinality Spectrum (CCS). This paper extends the previous work on CCS by studying the problem of DAC codebook structure. We define the Hamming Distance Spectrum (HDS) to describe DAC codebook structure and propose a mathematical method to calculate the HDS of DAC codes. The theoretical analyses are verified by experimental results.
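    The notion of a Hamming distance spectrum can be shown on a toy codebook. This brute-force enumeration is only a sketch; the paper's contribution is an analytical method for DAC codebooks, whereas here the codebook is made up.

    ```python
    from collections import Counter
    from itertools import combinations

    def hamming(a, b):
        """Hamming distance between two equal-length codewords."""
        return sum(x != y for x, y in zip(a, b))

    def hamming_distance_spectrum(codebook):
        """Histogram of pairwise Hamming distances within one codebook:
        spectrum[d] = number of codeword pairs at distance d."""
        return Counter(hamming(a, b) for a, b in combinations(codebook, 2))

    # Illustrative codebook (an even-weight code), not a DAC codebook.
    codebook = ["0000", "0011", "0101", "0110"]
    print(dict(hamming_distance_spectrum(codebook)))  # → {2: 6}
    ```

    A spectrum concentrated at large distances means the codewords sharing a codebook are easy to tell apart with side information, which is why the HDS matters for DAC performance.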

    Joint coding/decoding techniques and diversity techniques for video and HTML transmission over wireless point/multipoint: a survey

    I. Introduction. The concomitant developments of the Internet, which offers its users ever larger and more sophisticated content (from HTML (HyperText Markup Language) files to multimedia applications), and of the wireless systems and handhelds that carry it, have progressively convinced a fair share of people of the value of being always connected. Still, constraints of heterogeneity, reliability, quality, and delay over the transmission channels are generally imposed to fulfill the requirements of these new needs and their corresponding economic goals. This implies various theoretical and practical challenges for today's digital communications community. This paper presents a survey of the techniques existing in the domain of HTML and video stream transmission over erroneous or lossy channels. In particular, the existing techniques for joint source and channel coding and decoding for multimedia or HTML applications are surveyed, as well as the related problems of streaming and downloading files over a mobile IP link. Finally, various diversity techniques that can be considered for such links, from antenna diversity to coding diversity, are presented.
    This article first presents a state of the art of the main joint coding and decoding techniques developed in the literature for multimedia applications such as content download and streaming over a mobile IP link. Fundamental notions of digital communications are first recalled, namely source coding, channel coding, Shannon's theorems, and their main limitations. The joint coding/decoding techniques presented here essentially concern source coding schemes involving variable-length codes (VLCs), notably Huffman codes, arithmetic codes, and universal Lempel–Ziv (LZ) entropy codes. Addressing the problem of transmitting data (HyperText Markup Language (HTML) and video) over a wireless link, the article then surveys diversity techniques of varying complexity, leading up to the new systems with multiple transmit and receive antennas.

    MAP Joint Source-Channel Arithmetic Decoding for Compressed Video

    In order to have robust video transmission over error-prone telecommunication channels, several mechanisms have been introduced. These mechanisms try to detect, correct, or conceal errors in the received video stream. In this thesis, the performance of the video codec is improved in terms of error rates without increasing the data bit-rate overhead. This is done by exploiting the residual syntactic/semantic redundancy inside compressed video, optimizing the configuration of the state-of-the-art entropy coder, i.e., the binary arithmetic coder, and optimizing the quantization of the channel output. The thesis is divided into four phases. In the first phase, a breadth-first suboptimal sequential maximum a posteriori (MAP) decoder is employed for joint source-channel arithmetic decoding of H.264 symbols. The proposed decoder not only uses the intentional redundancy inserted via a forbidden symbol (FS) but also exploits residual redundancy via a syntax checker; in contrast to previous methods, this is done as each channel bit is decoded. Simulations using intra-prediction modes show improvements in error rates, e.g., a reduction of the syntax-element error rate by an order of magnitude at a channel SNR of 7.33 dB. The cost of this improvement is additional computational complexity spent on syntax checking. In the second phase, the configuration of the FS in the symbol set is studied. The delay probability function, i.e., the probability distribution of the number of bits required to detect an error, is calculated for various FS configurations. The probability of missed error detection is calculated as a figure of merit for optimizing the FS configuration. The simulation results show the effectiveness of the proposed figure of merit and support the FS configuration in which the FS lies entirely between the other information-carrying symbols as the best. In the third phase, a new method for estimating the a priori probability of particular syntax elements is proposed.
    This estimation is based on the interdependency among previously decoded syntax elements and is categorized as either reliable or unreliable. The decoder uses this prior information when it is reliable; otherwise, the MAP decoder considers the syntax elements equiprobable and in turn uses maximum likelihood (ML) decoding. Reliability detection is carried out using a threshold on the local entropy of syntax elements in the neighboring macroblocks. In the last phase, a new measure for assessing the performance of the channel quantizer is proposed. This measure is based on the statistics of the rank of the true candidate among the sorted list of candidates in the MAP decoder. Simulation results show that a quantizer designed with the proposed measure is superior to quantizers designed for maximum mutual information or minimum mean square error.
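    The forbidden-symbol mechanism underlying the first two phases can be sketched with an exact-arithmetic toy coder: a slice of the coding interval is reserved for a symbol that is never encoded, so a decoded value landing in that slice reveals channel corruption. The two-symbol alphabet, the probabilities, and the FS slice below are illustrative assumptions, not the thesis configuration.

    ```python
    from fractions import Fraction as F

    # Toy model: a -> [0, 0.4), b -> [0.4, 0.8), forbidden symbol -> [0.8, 1).
    MODEL = {'a': (F(0), F(2, 5)), 'b': (F(2, 5), F(2, 5))}
    FS_LOW = F(4, 5)

    def encode(symbols):
        low, rng = F(0), F(1)
        for s in symbols:                       # shrink the interval per symbol
            base, width = MODEL[s]
            low, rng = low + rng * base, rng * width
        n = 1                                   # shortest binary fraction in the interval
        while F(1, 2 ** n) > rng / 2:
            n += 1
        num = -(-low.numerator * 2 ** n // low.denominator)   # ceil(low * 2^n)
        return format(num, f'0{n}b')

    def decode(bits, n_symbols):
        v = F(int(bits, 2), 2 ** len(bits))
        low, rng, out = F(0), F(1), []
        for _ in range(n_symbols):
            t = (v - low) / rng                 # position inside current interval
            if t >= FS_LOW:
                return out, True                # forbidden symbol: error detected
            s = 'a' if t < MODEL['b'][0] else 'b'
            base, width = MODEL[s]
            low, rng = low + rng * base, rng * width
            out.append(s)
        return out, False

    bits = encode("abba")
    print(decode(bits, 4))                      # (['a', 'b', 'b', 'a'], False)
    corrupted = ('1' if bits[0] == '0' else '0') + bits[1:]
    print(decode(corrupted, 4))                 # here the flip lands in the FS slice
    ```

    Widening the FS slice raises the detection probability but costs compression, which is exactly the configuration trade-off studied in the second phase.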

    Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision, reduced-complexity implementation of arithmetic coding, which amounts to approximating the distribution of the source. This approximation introduces redundancy (excess rate) that can be exploited for robust decoding in the presence of transmission errors; it thus controls both the trade-off between compression efficiency and complexity and the redundancy introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results show improved symbol error rate (SER) and SNR performance compared with Huffman and optimal arithmetic codes.
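    A rough sketch of why a quasi-arithmetic coder reduces to a finite state machine: with a small integer interval precision only finitely many renormalized intervals can occur, so encoding becomes a table of transitions. The precision T = 4 and the approximate probability split below are my own toy assumptions, and a real coder also handles the underflow (straddling-middle) case, omitted here.

    ```python
    T = 4  # tiny integer precision; the coding interval lives in [0, T)

    def renormalize(low, high):
        """Emit bits while the interval sits entirely in one half of [0, T)."""
        bits = []
        while True:
            if high <= T // 2:                  # lower half -> emit 0, scale up
                bits.append('0'); low, high = 2 * low, 2 * high
            elif low >= T // 2:                 # upper half -> emit 1, scale up
                bits.append('1'); low, high = 2 * low - T, 2 * high - T
            else:
                return low, high, ''.join(bits)

    def step(state, symbol):
        """One FSM transition: subdivide [low, high) for one input bit,
        using an integer approximation of P(0) = 0.7."""
        low, high = state
        split = low + max(1, round((high - low) * 0.7))
        split = min(split, high - 1)
        if symbol == 0:
            return renormalize(low, split)
        return renormalize(split, high)

    # Enumerate the reachable renormalized intervals: the coder's states.
    states, frontier = set(), [(0, T)]
    while frontier:
        s = frontier.pop()
        if s in states:
            continue
        states.add(s)
        for sym in (0, 1):
            nlow, nhigh, _ = step(s, sym)
            frontier.append((nlow, nhigh))
    print(sorted(states))                       # → [(0, 3), (0, 4)]
    ```

    With only a handful of states, the (state, symbol) → (next state, output bits) table can be precomputed, which is what makes quasi-arithmetic coding cheap relative to full-precision arithmetic coding.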