40 research outputs found

    Dynamic information and constraints in source and channel coding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 237-251).

    This thesis explores dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such distortion side information is not only useful at the encoder; under certain conditions, knowing it at the encoder is optimal and knowing it at the decoder is useless. Distortion side information is thus a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on graphs is of independent interest, since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side information correlated with the source, we consider fixed-lag side information at the decoder. We focus on the special case of perfect side information with unit lag, corresponding to source coding with feedforward (the dual of channel coding with feedback). Using duality, we develop a linear-complexity algorithm which exploits the feedforward information to achieve the rate-distortion bound.

    The second part of the thesis focuses on channel dynamics in communication by introducing a new system model to study delay in streaming applications. We first consider an adversarial channel model where at any time the channel may suffer a burst of degraded performance (e.g., due to signal fading, interference, or congestion) and prove a coding theorem for the minimum decoding delay required to recover from such a burst. Our coding theorem illustrates the relationship between the structure of a code, the dynamics of the channel, and the resulting decoding delay. We also consider more general channel dynamics. Specifically, we prove a coding theorem establishing that, for certain collections of channel ensembles, delay-universal codes exist that simultaneously achieve the best delay for every channel in the collection. Practical constructions with low encoding and decoding complexity are described for both cases.

    Finally, we consider architectures consisting of both source and channel coding which deal with channel dynamics by spreading information over space, frequency, multiple antennas, or alternate transmission paths in a network to avoid coding delays. Specifically, we explore whether the inherent diversity in such parallel channels should be exploited at the application layer via multiple description source coding, at the physical layer via parallel channel coding, or through some combination of joint source-channel coding. For on-off channel models, application-layer diversity architectures achieve better performance, while for channels with a continuous range of reception quality (e.g., additive Gaussian noise channels with Rayleigh fading), the reverse is true. Joint source-channel coding achieves the best of both by performing as well as application-layer diversity for on-off channels and as well as physical-layer diversity for continuous channels.

    by Emin Martinian. Ph.D.
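
    To make the flavour of distortion side information concrete, here is a minimal, hypothetical Python sketch (not the thesis's lattice or codes-on-graphs constructions): a uniform scalar quantizer whose step size adapts to a per-sample weight q that enters the distortion measure but carries no information about the source value itself. The weighting model, step rule, and all names are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical model: sample i carries a weight q[i] that enters the
        # distortion measure q * (x - x_hat)**2 but says nothing about x itself.
        # Allocating a finer step to high-weight samples (step ~ 1/sqrt(q))
        # roughly equalises the weighted distortion across samples.
        def quantize(x, q, base_step=1.0):
            step = base_step / np.sqrt(q)
            return step * np.round(x / step)

        x = rng.normal(size=8)               # source samples
        q = rng.uniform(0.25, 4.0, size=8)   # distortion side information
        x_hat = quantize(x, q)
        print(f"weighted MSE: {np.mean(q * (x - x_hat) ** 2):.4f}")

    Note that only the encoder needs q here, matching the abstract's observation that distortion side information is most valuable at the encoder.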

    Evaluating and improving the performance of video content distribution in lossy networks

    The contributions of this research fall into three distinct but related areas, all aimed at improving the efficiency of video content distribution over networks liable to packet loss, such as the Internet. First, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the burden on TCP of dealing with losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are undesirable; a balance must therefore be struck between the additional bandwidth consumed by FEC and the delays caused by retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264-encoded video, as a compromise between delay-prone TCP and loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and its potential as an alternative to the conventional methods of transporting video over either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported over TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated while monitoring buffer behaviour to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric and show that the objective and subjective scores are closely correlated.
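
    As a toy illustration of the continuity idea behind pause intensity (a deliberate simplification, not the paper's exact metric), the Python sketch below replays a packet-arrival trace against an ideal playback clock and reports the fraction of the session spent stalled. The arrival times and the one-packet-per-time-unit playback model are assumptions.

        # Playback model: packets play back to back, one time unit each,
        # starting at the first arrival; a missing packet stalls the clock.
        def pause_intensity(arrivals):
            """Fraction of the session spent stalled (0 = perfectly smooth)."""
            arrivals = sorted(arrivals)
            clock = arrivals[0]          # current playback time
            stalled = 0.0
            for a in arrivals:
                if a > clock:            # buffer underrun: wait for the packet
                    stalled += a - clock
                    clock = a
                clock += 1.0             # play this packet
            return stalled / (clock - arrivals[0])

        print(pause_intensity([0, 1, 2, 3, 4]))  # smooth trace -> 0.0
        print(pause_intensity([0, 1, 2, 8, 9]))  # retransmission gap -> 0.5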

    The Mojette erasure code: applications in networks and in the Cloud

    In this work, I present the benefits of the Mojette erasure code for fault-tolerant distributed storage architectures. In general terms, the coding approach halves the volume of stored data compared with the standard replication approach, which copies the data as many times as the number of failures to be tolerated. More specifically, the Mojette erasure code delivers the read and write performance required for hot data, i.e. data that is accessed very frequently. This I/O performance makes it possible, for example, to run virtual machines on data distributed by the RozoFS file system. I also review my contributions in the field of self-organised networks, both P2P and mobile ad hoc, presenting the P2PWeb and MP-OLSR protocols respectively. All of this work is the fruit of 5 doctoral supervisions and 3 major collaborative projects.
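
    As a back-of-the-envelope illustration of the factor-of-2 claim (assuming an idealised MDS-style code; the Mojette code adds a small extra overhead in practice), the Python sketch below compares the storage cost per source byte of replication against erasure coding for the same number of tolerated failures. The (k, f) values are illustrative.

        # Stored bytes per source byte needed to tolerate f device failures.
        def replication_overhead(f):
            return 1 + f                 # keep 1 original + f full copies

        def erasure_overhead(k, f):
            return (k + f) / k           # k data fragments + f parity fragments

        for f in (1, 2):
            print(f"tolerate {f}: replication x{replication_overhead(f)}, "
                  f"erasure code (k=4) x{erasure_overhead(4, f):.2f}")

    For f = 2 this prints 3x for replication against 1.5x for the coded layout, i.e. the factor-of-2 reduction cited above.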

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly high-performance ones, and the related decoding algorithms, especially those with low computational complexity. The work is composed of several pieces, developed within two main themes. First, ideas from message passing are applied to resolve erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, applied in particular to recover the erasures left unsolved by the recovery algorithm. Second, a novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study of the marginal number of erasures correctable by the In-place algorithm yields a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach maximum-likelihood performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose a product packetisation structure that keeps its computational complexity manageable; combined with this structure, the complexity stays below the quadratic bound. We then extend this work to the Rayleigh fading channel, where both errors and erasures must be resolved. By concatenating an outer code, such as a BCH code, product-packetised RS codes decoded with the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
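
    The following Python sketch shows the peeling form of BP decoding on the BEC that the recovery algorithm builds on: any parity check involving exactly one erased bit determines that bit, and the process repeats until no degree-one check remains. The tiny parity-check matrix is an illustrative example, not one of the codes studied in this work.

        import numpy as np

        # Toy parity-check matrix (3 checks on 6 bits), illustrative only.
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])

        def peel(received):
            """received: list of bits (0/1) with None marking erasures."""
            x = list(received)
            progress = True
            while progress:
                progress = False
                for row in H:
                    unknown = [j for j in np.flatnonzero(row) if x[j] is None]
                    if len(unknown) == 1:            # degree-1 check: solvable
                        j = unknown[0]
                        x[j] = sum(x[k] for k in np.flatnonzero(row)
                                   if k != j) % 2    # restore even parity
                        progress = True
            return x

        # (1,1,0,0,1,1) satisfies H x^T = 0; erase two of its positions.
        print(peel([1, None, 0, None, 1, 1]))        # -> [1, 1, 0, 0, 1, 1]

    When peeling stalls with no degree-one check left, guess-style algorithms of the kind developed above hypothesise one erased bit and resume, which is what makes them a natural complement to this decoder.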

    Coding in networks

    Transmission reliability is one of the main problems that designers of communication systems have to solve. Among reliability mechanisms, error-correcting codes protect transmitted data proactively against transmission errors. Historically, these codes were mainly used at the physical layer. The growth in processing power has allowed them to be integrated into the upper layers of communication protocol stacks since the mid-1990s, and this integration has opened new research questions. One of them is the design of codes adapted to the constraints of the systems into which they are integrated. The first part of the work presented in this memoir concerns this theme. In particular, we made several proposals to improve the software encoding and decoding speeds of MDS codes (whose best-known representatives are Reed-Solomon codes); an RFC on this subject is in the process of publication at the IETF. A modification of the structure of these codes allowed us to adapt them to multimedia transmission by introducing variable levels of protection between the symbols of a single codeword. Finally, by relaxing their structure as far as possible, we built an "on-the-fly" coding system that integrates particularly well into classical communication protocols. The second theme concerns the distribution of reliability mechanisms and redundancy across the protocol layers. For example, we studied the possibility of letting corrupted packets climb the stack to be corrected, or simply processed, by the upper layers. In collaborations with CNES and Thales Alenia Space, we studied multimedia transmission from satellites to mobile receivers (SDMB and DVB-SH), analysing the different ways of distributing redundancy across the physical, link, and upper layers. Various applications of this work led to the filing of 2 patents. The last strand of our research concerns applications of erasure codes. We presented contributions on the use of erasure codes in peer-to-peer networks; in particular, as early as 2002, we showed how such codes can speed up download times in this type of network. We also proposed a particular application of network coding, showing that this technique can reduce the end-to-end delay bounds of packets in networks offering quality-of-service guarantees.
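
    As a minimal sketch of the MDS principle behind the Reed-Solomon codes mentioned above (an illustration over the prime field GF(257), not the memoir's optimised software codecs, which work over GF(2^8) with table-based arithmetic), the following Python encodes k data symbols into n > k evaluations of a polynomial and recovers the data from any k surviving symbols.

        P = 257  # prime field, so pow(a, P - 2, P) inverts a (Fermat)

        def rs_encode(data, n):
            """Evaluate the polynomial with coefficients data at x = 1..n."""
            return [sum(d * pow(x, j, P) for j, d in enumerate(data)) % P
                    for x in range(1, n + 1)]

        def rs_decode(points, symbols, k):
            """Recover k data symbols from any k (point, symbol) pairs."""
            rows = [[pow(x, j, P) for j in range(k)] + [y]
                    for x, y in zip(points, symbols)]
            for col in range(k):                     # Gauss-Jordan mod P
                piv = next(r for r in range(col, k) if rows[r][col])
                rows[col], rows[piv] = rows[piv], rows[col]
                inv = pow(rows[col][col], P - 2, P)
                rows[col] = [v * inv % P for v in rows[col]]
                for r in range(k):
                    if r != col and rows[r][col]:
                        f = rows[r][col]
                        rows[r] = [(a - f * b) % P
                                   for a, b in zip(rows[r], rows[col])]
            return [rows[i][k] for i in range(k)]

        data = [42, 7, 200, 13]          # k = 4 source symbols (values < P)
        code = rs_encode(data, n=6)      # n = 6 symbols: tolerates 2 erasures
        pts, syms = [1, 3, 4, 6], [code[0], code[2], code[3], code[5]]
        print(rs_decode(pts, syms, k=4)) # -> [42, 7, 200, 13]

    Unequal erasure protection of the kind described above can be obtained by giving the most important symbols a larger share of such redundancy than the rest of the codeword.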