8 research outputs found

    Analysis of hybrid-ARQ based relaying protocols under modulation constraints

    In a seminal paper published in 2001, Caire and Tuninetti derived an information-theoretic bound on the throughput of hybrid-ARQ in the presence of block fading. However, their results placed no constraints on the modulation used, and therefore the input to the channel was assumed to be Gaussian. The purpose of this thesis is to investigate the impact of modulation constraints on the throughput of hybrid-ARQ in a block-fading environment. First, we consider the impact of modulation constraints on the information outage probability for a block-fading channel with a fixed-length codeword. Then, we consider the effect of modulation constraints on the throughput of hybrid-ARQ, where the rate of the codeword varies depending on the instantaneous channel conditions. These theoretical bounds are compared against the simulated performance of HSDPA, a newly standardized hybrid-ARQ protocol that uses QPSK and 16-QAM bit-interleaved turbo-coded modulation. The results indicate how much of the gap between HSDPA and the earlier unconstrained-modulation bound is due to the use of the turbo code and how much is due to the modulation constraints. (Abstract shortened by UMI.)
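The information outage probability discussed in the abstract can be estimated by Monte Carlo simulation. The sketch below assumes i.i.d. Rayleigh block fading and unconstrained (Gaussian) inputs; the function and parameter names are ours, not the thesis's:

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(rate_bpcu, snr_db, n_blocks, n_trials=100_000):
    """Estimate P(outage) for a codeword spanning n_blocks i.i.d.
    Rayleigh fading blocks with unconstrained (Gaussian) inputs."""
    snr = 10 ** (snr_db / 10)
    # |h|^2 is exponentially distributed with unit mean for Rayleigh fading
    gains = rng.exponential(1.0, size=(n_trials, n_blocks))
    # average mutual information over the blocks spanned by one codeword
    mi = np.mean(np.log2(1.0 + snr * gains), axis=1)
    return np.mean(mi < rate_bpcu)

# More fading blocks per codeword -> more diversity -> lower outage
p1 = outage_prob(rate_bpcu=1.0, snr_db=10.0, n_blocks=1)
p4 = outage_prob(rate_bpcu=1.0, snr_db=10.0, n_blocks=4)
```

The drop from `p1` to `p4` illustrates the diversity gain of spreading a fixed-length codeword over more fading blocks.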

    Coding in 802.11 WLANs

    Forward error correction (FEC) coding is widely used in communication systems to correct transmission errors. In IEEE 802.11a/g transmitters, convolutional codes are used for FEC at the physical (PHY) layer. As is typical in wireless systems, only a limited choice of pre-specified coding rates is supported. These are implemented in hardware and are thus difficult to change, and the coding rates are selected with point-to-point operation in mind. This thesis is concerned with using FEC coding in 802.11 WLANs in more interesting ways that are better aligned with application requirements: for example, coding that supports multicast traffic rather than simple point-to-point traffic; coding that is cognisant of the multiuser nature of the wireless channel; and coding that takes account of delay requirements as well as losses. We consider layering additional coding on top of the existing 802.11 PHY layer coding, and investigate the tradeoff between higher-layer coding and PHY layer modulation and FEC coding as well as MAC layer scheduling. Firstly, we consider the joint multicast performance of higher-layer fountain coding concatenated with 802.11a/g OFDM PHY modulation/coding. A study on the optimal choice of PHY rates with and without fountain coding is carried out for standard 802.11 WLANs. We find that, in contrast to studies in cellular networks, in 802.11a/g WLANs the PHY rate that optimizes uncoded multicast performance is also close to optimal for fountain-coded multicast traffic. This indicates that in 802.11a/g WLANs cross-layer rate control for higher-layer fountain coding concatenated with physical-layer modulation and FEC would bring few benefits. Secondly, using experimental measurements taken in an outdoor environment, we model the channel provided by outdoor 802.11 links as a hybrid binary symmetric/packet erasure channel. This hybrid channel offers capacity increases of more than 100% compared to a conventional packet erasure channel (PEC) over a wide range of RSSIs. Based upon the established channel model, we further consider the potential performance gains of adopting a binary symmetric channel (BSC) paradigm for multi-destination aggregation in 802.11 WLANs. We consider two BSC-based higher-layer coding approaches, namely superposition coding and a simpler time-sharing coding, for multi-destination aggregated packets. The performance results for both unicast and multicast traffic, taking account of MAC layer overheads, demonstrate that increases in network throughput of more than 100% are possible over a wide range of channel conditions, and that the simpler time-sharing approach yields most of these gains with only a minor loss of performance. Finally, we consider the proportional fair allocation of higher-layer coding rates and airtimes in 802.11 WLANs, taking link losses and delay constraints into account. We find that a layered approach that separates MAC scheduling from higher-layer coding-rate selection is optimal. The proportional fair coding rate and airtime allocation (i) assigns equal total airtime (i.e. airtime including both successful and failed transmissions) to every station in a WLAN, (ii) makes the station airtimes sum to unity (ensuring operation at the rate region boundary), and (iii) selects the coding rate that maximises goodput (treating packets decoded after the delay deadline as losses).
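The capacity gain of the BSC paradigm over the packet-erasure paradigm can be illustrated with a back-of-the-envelope calculation. The packet length and crossover probability below are assumed values chosen for illustration, not the RSSI-dependent measurements from the thesis:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

L = 8000          # assumed packet length in bits
p = 1e-4          # assumed per-bit crossover probability

# PEC paradigm: a packet containing any bit error is discarded entirely,
# so the useful fraction of transmitted bits is the packet success rate
cap_pec = (1 - p) ** L
# BSC paradigm: bit errors are corrected by coding at the BSC capacity
cap_bsc = 1 - h2(p)

gain = cap_bsc / cap_pec
```

Under the PEC view a single bit error wastes the whole packet, so for these assumed parameters the BSC view more than doubles the useful rate, consistent with the >100% figure quoted above.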

    Advanced constellation and demapper schemes for next generation digital terrestrial television broadcasting systems

    206 p. This thesis presents a new type of constellations, called non-uniform constellations. These schemes achieve gains of up to 1.8 dB over the constellations used in the latest digital terrestrial television communication systems, and they carry over to any other communication system (satellite, mobile, cable, etc.). In addition, this work contributes a new constellation-design methodology that reduces optimization time from days/hours (with current methodologies) to hours/minutes at the same efficiency. All the designed constellations are tested on a platform, built for this thesis, that simulates the most advanced terrestrial broadcasting standard to date (ATSC 3.0) under realistic operating conditions. Furthermore, to reduce the decoding latency of these constellations, this thesis proposes two detection/demapping techniques. The first, for two-dimensional non-uniform constellations, reduces demapping complexity by up to 99.7% without degrading system performance. The second focuses on one-dimensional non-uniform constellations and achieves up to an 87.5% reduction in receiver complexity with no loss of performance. Finally, this work presents a comprehensive state of the art on constellation types, system models, and constellation design/demapping; this survey is the first carried out in this field.
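A low-complexity demapper of the kind discussed above computes per-bit log-likelihood ratios with the max-log approximation. The sketch below uses an illustrative one-dimensional 4-point non-uniform constellation with Gray labeling; the point positions are our assumptions, not the optimized ATSC 3.0 sets:

```python
import numpy as np

# Illustrative 1D non-uniform 4-level constellation with Gray labels
# (point positions are assumed, not the ATSC 3.0 optimized values)
points = np.array([-1.2, -0.3, 0.3, 1.2])
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])

def maxlog_llr(y, noise_var):
    """Max-log LLR per bit for one received sample y; a positive
    LLR favours bit = 0, a negative LLR favours bit = 1."""
    d2 = (y - points) ** 2 / noise_var          # scaled squared distances
    llrs = []
    for b in range(labels.shape[1]):
        d0 = d2[labels[:, b] == 0].min()        # best point with bit = 0
        d1 = d2[labels[:, b] == 1].min()        # best point with bit = 1
        llrs.append(d1 - d0)
    return np.array(llrs)

llr = maxlog_llr(y=1.1, noise_var=0.1)
```

For y = 1.1 the nearest point is 1.2 with label [1, 0], so the first LLR comes out negative (favouring bit 1) and the second positive (favouring bit 0).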

    Channel coding for highly efficient transmission in wireless local area network

    Since their rediscovery, Low-Density Parity-Check (LDPC) codes have attracted great interest because they approach channel capacity with low decoding complexity. They are therefore a promising channel-coding scheme for future wireless applications. However, they still suffer from high encoding complexity, and designing practical LDPC codes that combine low implementation cost with good performance remains challenging. This thesis explores the potential of LDPC codes with respect to the technical requirements of wireless local area networks (WLANs). It focuses on three major topics in LDPC research: code characterization via the girth degree distribution, low encoding complexity via structured code construction, and faster decoding convergence via a two-stage decoding scheme. In the first part of the thesis, a novel concept, the girth degree, is introduced. It combines the classical notion of girth with the node degree, and is used to characterize codes and estimate their performance. A simple tree-based search algorithm is applied to detect and count girths. The proposed concept predicts performance more effectively than girth alone: the girth degree plays a more significant role than the girth itself in determining code performance, and the mere existence of short cycles of length four does not necessarily degrade performance. The second part deals with a simple method for constructing a class of LDPC codes that have relatively low encoding complexity yet show good performance. Combining a stair structure with permutation matrices yields a very simple encoding process; the resulting encoder can be implemented using nothing more than simple shift-register circuits. The performance of the resulting codes is comparable to that of irregular MacKay codes, and at short code lengths they even outperform some well-established structured codes. Compared with the optional LDPC codes specified for WLAN, the proposed codes are suboptimal at low code rates but comparable at higher code rates, while requiring relatively low encoding complexity. In the third part, a method for accelerating decoding convergence in systems with high-order coded modulation is introduced. The two-stage decoding scheme improves bit reliabilities during the decoding process, reducing the number of decoding iterations without performance loss. This is achieved by feeding the output of the first decoding stage back as additional input to the second decoding stage. An optimal combination of the maximum iteration counts of the two stages reduces the average number of iterations. The method is most effective in the waterfall region of the signal-to-noise ratio.
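The role of short cycles mentioned above can be examined with a simple counter: two columns of a parity-check matrix H that share two or more rows form length-4 cycles in the Tanner graph. A minimal sketch (the toy H is ours, and this is a direct pairwise count rather than the thesis's tree-based search):

```python
import numpy as np

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H:
    each pair of columns sharing k >= 2 rows contributes C(k, 2) cycles."""
    H = np.asarray(H, dtype=int)
    overlap = H.T @ H              # overlap[i, j] = rows shared by cols i, j
    n = H.shape[1]
    total = 0
    for i in range(n):
        for j in range(i + 1, n):
            k = overlap[i, j]
            total += k * (k - 1) // 2   # choose 2 shared rows -> one 4-cycle
    return total

# Toy example: columns 0 and 1 share rows 0 and 1 -> exactly one 4-cycle
H = [[1, 1, 0],
     [1, 1, 1],
     [0, 0, 1]]
n4 = count_4cycles(H)
```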

    A super-Nyquist architecture for rateless underwater acoustic communication

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 135-136). Oceans cover about 70 percent of Earth's surface. Despite the abundant resources they contain, much of the ocean remains unexplored. Underwater communication plays a key role in deep ocean exploration. It is also essential to the oil and fishing industries, as well as for military use. Although research on wireless communication in the underwater environment began decades ago, it remains a challenging problem because of the oceanic medium, in which dynamic movements of water and rich scattering are commonplace. In this thesis, we develop an architecture for reliable communication over the underwater acoustic channel. A notable feature of this architecture is its rateless property: the receiver simply collects pieces of the transmission until successful decoding is possible. With this, we aim to achieve capacity-approaching communication under a variety of a priori unknown channel conditions. This is done by using a super-Nyquist (SNQ) transmission scheme. Several other important technologies are also part of the design, among them dithered repetition coding, adaptive decision feedback equalization (DFE), and multiple-input multiple-output (MIMO) communication. We present a complete block diagram for the transmitter and receiver architecture of the SNQ scheme. We prove the sufficiency of the architecture for optimality, and we show through analysis and simulation that, as the SNQ signaling rate increases, the SNQ scheme is indeed capacity-achieving. Finally, the performance of the proposed SNQ scheme and its transceiver design are tested in physical experiments, whose results show that the SNQ scheme achieves a significant gain in reliable communication rate over conventional (non-SNQ) schemes. by Qing He. S.M.
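The rateless property described above can be caricatured with a toy model in which the receiver combines repeated copies of a block until the accumulated mutual information covers the code rate. This idealized AWGN sketch is our simplification, not the thesis's SNQ transceiver; it only illustrates how worse channels force the receiver to collect more pieces:

```python
import numpy as np

def blocks_needed(rate_bpcu, snr_db, max_blocks=64):
    """Number of repeated copies a rateless receiver collects before the
    accumulated mutual information covers the code rate, assuming ideal
    maximal-ratio combining of unit-gain copies on an AWGN channel."""
    snr = 10 ** (snr_db / 10)
    for k in range(1, max_blocks + 1):
        # combining k unit-gain copies scales the effective SNR by k
        if np.log2(1 + k * snr) >= rate_bpcu:
            return k
    return max_blocks

# A weaker channel forces the receiver to collect more pieces
k_good = blocks_needed(rate_bpcu=4.0, snr_db=10.0)
k_bad = blocks_needed(rate_bpcu=4.0, snr_db=0.0)
```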

    Proceedings of the 35th WIC Symposium on Information Theory in the Benelux and the 4th joint WIC/IEEE Symposium on Information Theory and Signal Processing in the Benelux, Eindhoven, the Netherlands May 12-13, 2014

    Compressive sensing (CS) as an approach to data acquisition has recently received much attention. In CS, recovering the signal from the observed data requires solving for a sparse vector from an underdetermined system of equations. The underlying sparse signal recovery problem is quite general, with many applications, and is the focus of this talk. The main emphasis will be on Bayesian approaches to sparse signal recovery. We will examine sparse priors such as the super-Gaussian and Student-t priors, and appropriate MAP estimation methods. In particular, re-weighted l2 and re-weighted l1 methods developed to solve the optimization problem will be discussed. The talk will also examine a hierarchical Bayesian framework and then study in detail an empirical Bayesian method, the Sparse Bayesian Learning (SBL) method. If time permits, we will also discuss Bayesian methods for sparse recovery problems with structure: intra-vector correlation in the context of the block sparse model, and inter-vector correlation in the context of the multiple measurement vector problem.
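Of the MAP estimation methods mentioned, re-weighted l2 is easy to sketch: each iteration solves a weighted least-squares problem whose weights favour currently-large coefficients, in the style of FOCUSS. The problem sizes, sparsity level, and weighting rule below are illustrative assumptions, not the specific algorithms from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def reweighted_l2(A, y, iters=50, eps=1e-8):
    """FOCUSS-style re-weighted l2 sparse recovery: each iteration solves
    a weighted least-squares problem whose weights favour coordinates
    that are currently large, driving small coordinates toward zero."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        w = np.abs(x) + eps                     # current weights
        B = A * w                               # A @ diag(w), column scaling
        q = np.linalg.lstsq(B, y, rcond=None)[0]
        x = w * q                               # map back to original variables
    return x

# Underdetermined system (10 equations, 30 unknowns) with a 2-sparse truth
A = rng.standard_normal((10, 30))
x_true = np.zeros(30)
x_true[3], x_true[17] = 1.0, -2.0
y = A @ x_true

x_hat = reweighted_l2(A, y)
residual = np.linalg.norm(A @ x_hat - y)
nnz = int(np.sum(np.abs(x_hat) > 1e-3))        # significant coordinates
```

Each iterate stays feasible (A x = y) while the re-weighting concentrates the solution on a few coordinates, which is what makes the l2 iteration act as a sparse solver.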

    XIII Jornadas de ingeniería telemática (JITEL 2017)

    The Jornadas de Ingeniería Telemática (JITEL), organized by the Asociación de Telemática (ATEL), are a forum for meeting, debate, and dissemination for the groups that teach and carry out research on telematic networks and services. The event aims to foster, on the one hand, the exchange of experiences and results, and on the other, communication and cooperation among the research groups working on telematics. Alongside the traditional sessions that characterize scientific conferences, it seeks to promote more open activities that stimulate the exchange of ideas between experienced and novice researchers, and to create links and meeting points between the different research groups and teams. To this end, in addition to inviting relevant figures in the corresponding fields, sessions are included for presenting and discussing the active research lines and projects of these groups. Lloret Mauri, J.; Casares Giner, V. (2018). XIII Jornadas de ingeniería telemática (JITEL 2017). Editorial Universitat Politècnica de València. http://hdl.handle.net/10251/97612