16 research outputs found

    On The Analysis of Spatially-Coupled GLDPC Codes and The Weighted Min-Sum Algorithm

    This dissertation studies methods to achieve reliable communication over unreliable channels. Iterative decoding algorithms for low-density parity-check (LDPC) codes and generalized LDPC (GLDPC) codes are analyzed, and a new class of error-correcting codes is proposed to enhance the reliability of communication in high-speed systems such as optical communication systems. The class of spatially-coupled GLDPC codes is studied, and a new iterative hard-decision decoding (HDD) algorithm for GLDPC codes is introduced. The main result is that the minimal redundancy allowed by Shannon's Channel Coding Theorem can be achieved by using the new iterative HDD algorithm with spatially-coupled GLDPC codes. A variety of LDPC ensembles have been observed to approach capacity with iterative decoding, but all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. To the best of our knowledge, this is the first system that can approach the channel capacity using iterative HDD.

    The optimality of a codeword returned by the weighted min-sum (WMS) algorithm, an iterative decoding algorithm widely used in practice, is studied as well. Both attenuated max-product (AttMP) decoding and WMS decoding for LDPC codes are analyzed. Applying the max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for best-assignment problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This work presents an analysis of both AttMP decoding and WMS decoding for LDPC codes which guarantees convergence to a fixed point when the weight factor β is sufficiently small. It also shows that if the fixed point satisfies certain consistency conditions, then it must be both a linear-programming (LP) and maximum-likelihood (ML) decoding solution.
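
    As a rough illustration of the role of the weight factor β, here is a minimal Python sketch of one weighted min-sum iteration. It is not the dissertation's implementation, and the parity-check matrix, channel LLRs and β value in the usage lines are placeholder assumptions:

        import numpy as np

        def wms_iteration(H, llr, v2c, beta):
            # One weighted min-sum iteration on a binary parity-check matrix.
            # H    : (m, n) 0/1 parity-check matrix (check degrees >= 2 assumed)
            # llr  : (n,) channel log-likelihood ratios
            # v2c  : (m, n) variable-to-check messages (initially H * llr)
            # beta : weight factor attenuating the check-to-variable messages
            m, n = H.shape
            c2v = np.zeros((m, n))
            for i in range(m):
                idx = np.flatnonzero(H[i])
                for j in idx:
                    others = idx[idx != j]
                    sign = np.prod(np.sign(v2c[i, others]))
                    c2v[i, j] = beta * sign * np.min(np.abs(v2c[i, others]))
            totals = llr + c2v.sum(axis=0)      # posterior LLR per bit
            new_v2c = (totals - c2v) * H        # keep extrinsic messages only
            hard = (totals < 0).astype(int)     # current hard decision
            return new_v2c, hard

        H = np.array([[1, 1, 1, 0], [0, 1, 1, 1]])
        llr = np.array([1.2, -0.4, 0.9, 0.3])
        v2c, hard = wms_iteration(H, llr, H * llr, beta=0.8)

    Setting beta = 1 recovers plain min-sum; the analysis above concerns the regime where β is small enough for the iteration to converge to a fixed point.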

    Graph-based techniques for compression and reconstruction of sparse sources

    The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the compression schemes considered share the feature that the encoder can be represented by a graph, so they can be studied using tools from modern coding theory. In particular, the thesis focuses on two compression problems: group testing and noiseless compressed sensing. Although the two problems may seem unrelated, the thesis shows that they are closely connected. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes that use the OR operator, and these similarities are exploited throughout.

    The group testing problem aims to identify the defective subjects of a population with as few tests as possible. Group testing schemes can be divided into two groups: adaptive and non-adaptive. The former generate tests sequentially and exploit partial decoding results to try to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible. Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme designed to perform the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large; these tools make it possible to characterize adaptive and non-adaptive group testing schemes without simulating them.

    The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection onto a lower-dimensional space, which is possible only when the number of null components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that recover the original signal vector from as few samples as possible. In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation; recent results in the state of the art show that this approach is more efficient than the classical one. Our contributions to noiseless compressed sensing are both theoretical and practical. We deduce a necessary and sufficient matrix design condition to guarantee that the reconstruction is lossless, and we propose two novel reconstruction algorithms based on message passing over the sparse representation of the matrix, one of them with very low computational complexity.
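
    To make the OR-operator formulation concrete, here is a small non-adaptive example in Python using the standard COMP decoding rule (clear every subject that appears in a negative test, declare the rest defective). This is a generic textbook baseline, not necessarily one of the schemes developed in the thesis, and the pooling design and defective set below are arbitrary placeholders:

        import numpy as np

        def run_tests(A, defective):
            # Non-adaptive group testing: test i is positive iff its pool
            # (row i of A) contains at least one defective subject (OR model).
            return (A @ defective) > 0

        def comp_decode(A, outcomes):
            # COMP rule: any subject appearing in a negative test is certainly
            # non-defective; everyone else is declared defective (this can
            # produce false positives, but never false negatives).
            cleared = (A[~outcomes] > 0).any(axis=0)
            return ~cleared

        rng = np.random.default_rng(0)
        A = (rng.random((6, 10)) < 0.3).astype(int)   # random pooling design
        truth = np.zeros(10, dtype=bool)
        truth[[2, 7]] = True                          # two defective subjects
        print(comp_decode(A, run_tests(A, truth)))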
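    On the compressed sensing side, message passing over a sparse 0/1 measurement matrix can be illustrated with a simple peeling decoder: a zero measurement clears every component it touches (assuming the nonzero signal values are generic reals, so exact cancellations have probability zero), and a measurement with a single unresolved component reveals that component's value. Again, this is a generic sketch rather than the thesis's algorithms:

        import numpy as np

        def peel(A, y, max_iter=100):
            # Peeling reconstruction of a sparse x from noiseless y = A x.
            # A : (m, n) 0/1 measurement matrix; y : (m,) measurements.
            # Assumes the nonzero entries of x are generic reals, so a zero
            # measurement implies every component it touches is zero.
            A = A.astype(float).copy()
            y = y.astype(float).copy()
            x = np.full(A.shape[1], np.nan)       # NaN marks "unresolved"
            for _ in range(max_iter):
                progress = False
                for i in range(A.shape[0]):
                    idx = np.flatnonzero(A[i])
                    if idx.size == 0:
                        continue
                    if np.isclose(y[i], 0.0):     # zero measurement: clear pool
                        x[idx] = 0.0
                        A[:, idx] = 0.0
                        progress = True
                    elif idx.size == 1:           # one unknown: value revealed
                        j = idx[0]
                        x[j] = y[i]
                        y -= A[:, j] * x[j]       # peel its contribution off
                        A[:, j] = 0.0
                        progress = True
                if not progress:
                    break
            return x                              # NaN entries were unresolved

        A = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]])
        print(peel(A, A @ np.array([0.0, 0.0, 2.5, 0.0])))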

    Multicast MAC extensions for high rate real-time traffic in wireless LANs

    Nowadays we are rapidly moving from a mainly text-based to a multimedia-based Internet, and the widely deployed IEEE 802.11 wireless LANs are promising candidates for making it available to users anywhere, anytime, on any device. However, it is still a challenge to support group-oriented real-time multimedia services, such as video-on-demand, video conferencing, distance education, mobile entertainment services and interactive games, in wireless LANs, as the current protocols do not support reliable multicast: they simply send multicast packets open-loop as broadcast packets, i.e., without any acknowledgements or retransmissions.

    In this thesis we focus on MAC-layer reliable multicast approaches, which outperform upper-layer ones with both shorter delays and higher efficiency. Unlike polling-based approaches, which suffer from long delays, poor scalability and low efficiency, we explore a feedback jamming mechanism in which non-leader receivers may send negative acknowledgement (NACK) frames that destroy the acknowledgement (ACK) frame from the single leader receiver and thereby prompt retransmissions from the sender. Based on this feedback jamming scheme, we propose two MAC-layer multicast error correction protocols: the SEQ-driven Leader Based Protocol (SEQ-LBP) and the Hybrid Leader Based Protocol (HLBP). The former is an Automatic Repeat reQuest (ARQ) scheme, while the latter combines ARQ with packet-level Forward Error Correction (FEC). We evaluate the feedback jamming probabilities and the performance of SEQ-LBP and HLBP through theoretical analysis, NS-2 simulations and experiments on a real test-bed built with consumer wireless LAN cards. The results confirm the feasibility of the feedback jamming scheme and the strong performance of the proposed protocols: SEQ-LBP suits small multicast groups thanks to its short delay, effectiveness and simplicity, while HLBP is better for large multicast groups because of its high efficiency and scalability with respect to the number of receivers per group.
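    The feedback jamming idea can be sketched with a toy Monte Carlo model: every receiver that missed the packet sends a NACK in the ACK slot, the collision wipes out the leader's ACK, and the sender retransmits until it hears a clean ACK. The group size, loss rate and independence assumptions below are placeholders, and the model deliberately ignores the timing, capture and channel-error details analyzed in the thesis:

        import random

        def tx_count(n_receivers, p_loss, rng, cap=100):
            # Transmissions until all receivers hold the packet under a
            # leader-based NACK-jamming scheme: the sender retransmits while
            # either the leader missed the packet (no ACK) or some non-leader
            # missed it (its NACK jams the leader's ACK).
            pending = set(range(n_receivers))     # receiver 0 is the leader
            for sent in range(1, cap + 1):
                pending -= {r for r in pending if rng.random() > p_loss}
                if not pending:
                    return sent                   # clean ACK finally heard
            return cap

        rng = random.Random(1)
        runs = [tx_count(16, 0.1, rng) for _ in range(10000)]
        print(sum(runs) / len(runs))              # mean transmissions/packet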

    Design of large polyphase filters in the Quadratic Residue Number System


    Resource optimization for fault-tolerant quantum computing

    In this thesis we examine a variety of techniques for reducing the resources required for fault-tolerant quantum computation. First, we show how to simplify universal encoded computation by using only transversal gates and standard error correction procedures, circumventing existing no-go theorems. We then show how to simplify ancilla preparation, reducing the cost of error correction by more than a factor of four. Using this optimized ancilla preparation, we develop improved techniques for proving rigorous lower bounds on the noise threshold. Additional overhead can be incurred because quantum algorithms must be translated into sequences of gates that are actually available in the quantum computer; in particular, arbitrary single-qubit rotations must be decomposed into a discrete set of fault-tolerant gates. We find that by using a special class of non-deterministic circuits, the cost of decomposition can be reduced by as much as a factor of four over state-of-the-art techniques, which typically use deterministic circuits. Finally, we examine global optimization of fault-tolerant quantum circuits under physical connectivity constraints. We adapt techniques from VLSI in order to minimize time and space usage for computations in the surface code, and we develop a software prototype to demonstrate the potential savings.
    Comment: 231 pages, Ph.D. thesis, University of Waterloo
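
    The advantage of non-deterministic decomposition circuits can be seen with a back-of-the-envelope expectation: a repeat-until-success circuit that succeeds with probability p at a cost of c gates per attempt has expected cost c/p, which can undercut a deterministic decomposition of fixed cost. The numbers below are illustrative placeholders, not figures from the thesis:

        def expected_rus_cost(cost_per_attempt, success_prob):
            # Repeat-until-success: the number of attempts is geometric
            # with mean 1/p, so the expected gate cost is c / p.
            return cost_per_attempt / success_prob

        deterministic_cost = 100.0                   # placeholder gate count
        rus_cost = expected_rus_cost(20.0, 0.5)      # 40.0 expected gates
        print(rus_cost < deterministic_cost)         # True: cheaper here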