
    Efficient Information Reconciliation for Quantum Key Distribution

    Advances in modern cryptography for secret-key agreement are driving the development of new methods and techniques in key distillation. Most of these developments, focusing on information reconciliation and privacy amplification, are of direct benefit to quantum key distribution (QKD). In this context, information reconciliation has historically been done using heavily interactive protocols, i.e. protocols with a high number of channel communications, such as the well-known Cascade. In this work we show how modern coding techniques can improve the performance of these methods for information reconciliation in QKD. We propose the use of low-density parity-check (LDPC) codes, since they are good in both efficiency and throughput. An a priori price to pay for using LDPC codes is that good efficiency is only attained for very long codes and in a very narrow range of error rates. This forces the use of several codes when the error rate varies significantly across different uses of the channel, a common situation in QKD, for instance. To overcome these problems, this study examines various techniques for adapting LDPC codes, thus reducing the number of codes needed to cover the target range of error rates. These techniques are also used to improve the average efficiency of short-length LDPC codes based on a feedback coding scheme. The importance of short codes lies in the fact that they can be used for high-throughput hardware implementations. In a further advancement, a protocol is proposed that avoids the a priori error rate estimation required in other approaches. This blind protocol also has interesting implications for the finite key analysis.
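The LDPC-based reconciliation described above can be sketched with a toy code: Alice publishes the syndrome of her sifted key over the public channel, and Bob uses it to correct his noisy copy. This is a minimal illustration, not the thesis's protocol: a (7,4) Hamming parity-check matrix stands in for a real large, sparse LDPC matrix, and the single-error column lookup stands in for belief-propagation decoding.

```python
import numpy as np

# Toy parity-check matrix H (a (7,4) Hamming code) -- a stand-in for a
# real LDPC matrix, which would be long and sparse.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
], dtype=np.uint8)

def syndrome(H, x):
    """Syndrome s = H x (mod 2); Alice sends this over the public channel."""
    return H.dot(x) % 2

# Alice's sifted key (a codeword here, for simplicity) and Bob's noisy copy.
alice = np.array([1, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
bob = alice.copy()
bob[2] ^= 1  # one channel error

# Bob compares Alice's syndrome with his own; the nonzero difference
# localises the error (trivially, for this toy code: it equals the
# column of H at the error position).
diff = (syndrome(H, alice) + syndrome(H, bob)) % 2
error_pos = next(j for j in range(H.shape[1]) if np.array_equal(H[:, j], diff))
bob[error_pos] ^= 1  # correct the bit; keys now agree
```

Only the syndrome crosses the channel, which is what makes the scheme one-way rather than heavily interactive like Cascade.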

    Optimizing HARQ and relay strategies in limited feedback communication systems

    One of the key challenges for future communication systems is dealing with fast-changing channels caused by user mobility. A robust protocol capable of handling transmission failures in unfavorable channel conditions is crucial, but feedback capacity may be greatly limited by strict latency requirements. This paper studies the hybrid automatic repeat request (HARQ) techniques involved in retransmissions when decoding failures occur at the receiver, and proposes a scheme that relies on codeword bundling and adaptive incremental redundancy (IR) to maximize overall throughput in a limited-feedback system. In addition to the traditional codeword-extension IR bits, this paper introduces a new type of IR, bundle parity bits, obtained from an erasure code across all the codewords in a bundle. The type and number of IR bits to be sent in response to a decoding failure is optimized through a Markov decision process. In addition to the single-link analysis, the paper studies how the same techniques generalize to relay and multi-user broadcast systems. Simulation results show that the proposed schemes can provide a significant increase in throughput over traditional HARQ techniques.
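The bundle parity idea can be illustrated with the simplest possible erasure code across a bundle: a single XOR parity, which lets the receiver rebuild any one codeword that fails decoding. This is only a sketch of the concept; the paper's scheme would use a stronger erasure code and MDP-optimised IR bit counts, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "bundle" of four codewords (random bits stand in for real payloads).
bundle = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)

# Bundle parity bits: XOR across all codewords in the bundle.
# This single-parity erasure code can repair ONE erased codeword.
parity = np.bitwise_xor.reduce(bundle, axis=0)

# Suppose codeword 2 fails decoding (treated as an erasure) ...
received = [bundle[0], bundle[1], None, bundle[3]]

# ... the receiver rebuilds it by XOR-ing the parity with the survivors.
recovered = parity.copy()
for cw in received:
    if cw is not None:
        recovered ^= cw
assert np.array_equal(recovered, bundle[2])
```

The appeal in a limited-feedback setting is that one parity transmission can serve whichever codeword in the bundle happens to fail, rather than extending each codeword individually.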

    Novel LDPC coding and decoding strategies: design, analysis, and algorithms

    In this digital era, modern communication systems play an essential part in nearly every aspect of life, with examples ranging from mobile networks and satellite communications to the Internet and data transfer. Unfortunately, all communication systems in a practical setting are noisy, which means that we can either improve the physical characteristics of the channel or find a systematic solution, i.e. error control coding. The history of error control coding dates back to 1948, when Claude Shannon published his celebrated work “A Mathematical Theory of Communication”, which built a framework for channel coding, source coding and information theory. For the first time, we saw evidence for the existence of channel codes, which enable reliable communication as long as the information rate of the code does not surpass the so-called channel capacity. Nevertheless, in the following 60 years no codes were shown to closely approach this theoretical bound, until the arrival of turbo codes and the renaissance of LDPC codes. As a strong contender of turbo codes, the advantages of LDPC codes include parallel implementation of decoding algorithms and, more crucially, graphical construction of codes. However, there are also some drawbacks to LDPC codes, e.g. significant performance degradation due to the presence of short cycles, or very high decoding latency. In this thesis, we focus on the practical realisation of finite-length LDPC codes and devise algorithms to tackle these issues. Firstly, rate-compatible (RC) LDPC codes with short/moderate block lengths are investigated on the basis of optimising the graphical structure of the Tanner graph (TG), in order to achieve a variety of code rates (0.1 < R < 0.9) using only a single encoder-decoder pair.
    As is widely recognised in the literature, the presence of short cycles considerably reduces the overall performance of LDPC codes, which significantly limits their application in communication systems. To reduce the impact of short cycles effectively for different code rates, algorithms for counting short cycles and a graph-related metric called Extrinsic Message Degree (EMD) are applied in the development of the proposed puncturing and extension techniques. A complete set of simulations is carried out to demonstrate that the proposed RC designs can largely minimise the performance loss caused by puncturing or extension. Secondly, at the decoding end, we study novel decoding strategies which compensate for the negative effect of short cycles by reweighting part of the extrinsic messages exchanged between the nodes of a TG. The proposed reweighted belief propagation (BP) algorithms aim to implement efficient decoding, i.e. accurate signal reconstruction and low decoding latency, for LDPC codes via various design methods. A variable factor appearance probability belief propagation (VFAP-BP) algorithm is proposed, along with an improved version called the locally-optimized reweighted (LOW)-BP algorithm, both of which can be employed to significantly enhance decoding performance for regular and irregular LDPC codes. More importantly, the optimisation of reweighting parameters takes place only in an offline stage, so that no additional computational complexity is incurred during real-time decoding. Lastly, two iterative detection and decoding (IDD) receivers are presented for multiple-input multiple-output (MIMO) systems operating in a spatial multiplexing configuration.
    QR decomposition (QRD)-type IDD receivers utilise the proposed multiple-feedback (MF)-QRD or variable-M (VM)-QRD detection algorithm with a standard BP decoding algorithm, while knowledge-aided (KA)-type receivers are equipped with a simple soft parallel interference cancellation (PIC) detector and the proposed reweighted BP decoders. In the uncoded scenario, the proposed MF-QRD and VM-QRD algorithms are shown to approach optimal performance at reduced computational complexity. In the LDPC-coded scenario, simulation results illustrate that the proposed QRD-type IDD receivers offer near-optimal performance after a small number of detection/decoding iterations, and that the proposed KA-type IDD receivers significantly outperform receivers using alternative decoding algorithms while requiring similar decoding complexity.
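The cycle counting mentioned above is simple for the shortest case: a length-4 cycle in the Tanner graph exists wherever two check nodes (rows of the parity-check matrix) share two or more variable nodes (columns). A minimal sketch, counting 4-cycles directly from the matrix; the function name and toy matrix are illustrative, not taken from the thesis, and real counters also handle length-6 and longer cycles.

```python
import numpy as np
from itertools import combinations
from math import comb

def count_4_cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H.
    Each pair of columns shared by a pair of rows closes exactly one 4-cycle."""
    cycles = 0
    for r1, r2 in combinations(range(H.shape[0]), 2):
        overlap = int(np.sum(H[r1] & H[r2]))  # columns where both rows have a 1
        cycles += comb(overlap, 2)            # each shared pair -> one 4-cycle
    return cycles

# Small example: rows 0 and 1 share columns 0 and 1, giving one 4-cycle.
H = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=np.uint8)
print(count_4_cycles(H))  # -> 1
```

A puncturing or extension step can call such a counter (together with an EMD-style metric) to compare candidate graph modifications before committing to one.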

    Variable Rate Transmission Over Noisy Channels

    Hybrid automatic repeat request (hybrid ARQ) schemes aim to provide system reliability for transmissions over noisy channels while still maintaining reasonably high throughput efficiency by combining retransmissions of automatic repeat requests with forward error correction (FEC) coding. In type-II hybrid ARQ schemes, the additional parity information required by channel codes to achieve forward error correction is provided only when errors have been detected. Hence, the available bits are partitioned into segments, some of which are sent to the receiver immediately, while others are held back and transmitted only upon the detection of errors. This scheme raises two questions. Firstly, how should the available bits be ordered for optimal partitioning into consecutive segments? Secondly, how large should the individual segments be? This thesis aims to answer both of these questions for the transmission of convolutional and turbo codes over additive white Gaussian noise (AWGN), inter-symbol interference (ISI) and Rayleigh channels. Firstly, the ordering of bits is investigated by simulating the transmission of packets split into segments with a size of 1 bit and finding the critical number of bits, i.e. the number of bits at which the output of the decoder becomes error-free. This approach provides a maximum practical performance limit over a range of signal-to-noise levels. With these practical performance limits established, attention turns to the size of the individual segments, since 1-bit packets cause an intolerable overhead and delay. An adaptive hybrid ARQ system is investigated, in which the transmitter uses the number of bits already sent and the receiver's decoding results to adjust the size of the initial packet and subsequent segments to the conditions of a stationary channel.
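The 1-bit-segment experiment above amounts to a simple search: send one more bit, run the decoder, and stop at the first error-free result. A minimal sketch, in which `decode_ok` is a hypothetical stand-in for running the actual convolutional or turbo decoder on the bits received so far:

```python
def critical_number_of_bits(bits, decode_ok):
    """Smallest prefix length of `bits` after which decoding succeeds,
    mimicking transmission in 1-bit segments. `decode_ok(prefix)` is a
    stand-in predicate for the real decoder reporting an error-free output."""
    for n in range(1, len(bits) + 1):
        if decode_ok(bits[:n]):
            return n
    return None  # decoding never succeeded, even with all bits sent

# Toy stand-in: pretend the decoder needs at least 6 received bits.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(critical_number_of_bits(bits, lambda p: len(p) >= 6))  # -> 6
```

Averaging this critical number over many noise realisations at each SNR is what yields the practical performance limit that the segment-sizing study is then measured against.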

    Rateless Coding for Gaussian Channels

    A rateless code, i.e. a rate-compatible family of codes, has the property that codewords of the higher-rate codes are prefixes of those of the lower-rate ones. A perfect family of such codes is one in which each code in the family is capacity-achieving. We show by construction that perfect rateless codes with low-complexity decoding algorithms exist for additive white Gaussian noise channels. Our construction involves the use of layered encoding and successive decoding, together with repetition using time-varying layer weights. As an illustration of our framework, we design a practical three-rate code family. We further construct rich sets of near-perfect rateless codes within our architecture that require either significantly fewer layers or lower complexity than their perfect counterparts. Variations of the basic construction are also developed, including one for time-varying channels in which there is no a priori stochastic model.
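The defining prefix property can be sketched in a few lines: each lower-rate codeword is the higher-rate codeword with extra symbols appended, so the transmitter simply keeps sending until the receiver decodes. The per-stage encoders below are hypothetical stand-ins (plain repetition, with the paper's layering and time-varying weights omitted).

```python
def rateless_encode(message_bits, stage_encoders):
    """Build a rate-compatible family by appending each stage's symbols.
    Returns one codeword per rate; each higher-rate codeword is, by
    construction, a prefix of every lower-rate one."""
    codeword, family = [], []
    for enc in stage_encoders:
        codeword = codeword + enc(message_bits)  # append this stage's symbols
        family.append(list(codeword))            # snapshot: one rate in the family
    return family

# Toy three-rate family: each stage just repeats the message.
msg = [1, 0, 1]
family = rateless_encode(msg, [lambda m: list(m)] * 3)
assert all(family[i] == family[i + 1][:len(family[i])] for i in range(2))
```

With real channel codes in place of repetition, the receiver attempts decoding after each stage and sends a single acknowledgement on success, which is what makes the scheme rateless.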