
    New Identification and Decoding Techniques for Low-Density Parity-Check Codes

    Error-correction coding schemes are indispensable for today's high-capacity, high-data-rate communication systems. Among the various channel coding schemes, low-density parity-check (LDPC) codes, introduced by Robert G. Gallager, are prominent due to their capacity-approaching performance and superior error-correcting properties. There is no hard constraint on the code rate of LDPC codes. Consequently, it is natural to incorporate LDPC codes with various code rates and codeword lengths into adaptive modulation and coding (AMC) systems, which change the encoder and the modulator adaptively to improve system throughput. In conventional AMC systems, a dedicated control channel is assigned to coordinate the encoder/decoder changes. A question then arises: does the AMC system still work when such a control channel is absent? This work gives a positive answer to this question by investigating various scenarios consisting of different modulation schemes, such as quadrature-amplitude modulation (QAM) and frequency-shift keying (FSK), and different channels, such as additive white Gaussian noise (AWGN) channels and fading channels.

    LDPC decoding is usually carried out by iterative belief-propagation (BP) algorithms. As LDPC codes become prevalent in advanced communication and storage systems, low-complexity LDPC decoding algorithms are favored in practical applications. In the conventional BP decoding algorithm, the stopping criterion is to check whether all the parity checks are satisfied. This single rule may not identify undecodable blocks; as a result, decoding time and power are wasted on unnecessary iterations. In this work, we propose a new stopping criterion that identifies undecodable blocks at an early stage of the iterative decoding process.

    Furthermore, in the conventional BP decoding algorithm, the variable (check) nodes are updated in parallel. It is known that the number of iterations can be reduced by serial scheduling. Informed dynamic scheduling (IDS) algorithms have been proposed in the existing literature to further reduce the number of iterations; however, the computational complexity of finding the next node to update in existing IDS algorithms cannot be neglected. In this work, we propose a new, efficient IDS scheme that provides a better performance-complexity trade-off than existing IDS algorithms.

    In addition, the iterative decoding threshold (IDT), which is used to compare the asymptotic performance of LDPC codes, is investigated in this work. A family of LDPC codes, called LDPC convolutional codes, has drawn much attention from researchers in recent years due to the threshold saturation phenomenon. Computing the IDT of an LDPC convolutional code can be demanding when the termination length grows to the thousands or even approaches infinity, especially for AWGN channels. In this work, we propose a fast IDT estimation algorithm that greatly reduces the complexity of the IDT calculation, so that the IDTs of LDPC convolutional codes with arbitrarily large termination length (including infinity) can be obtained quickly.
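    The abstract does not reproduce the thesis's exact early-termination rule, but the idea can be sketched. The Python below implements a minimal flooding min-sum decoder with the conventional syndrome-based stopping rule, plus a hypothetical stagnation test (stop when the number of unsatisfied checks stops changing over a window) as a stand-in for a smarter undecodable-block detector; the window heuristic is illustrative, not the thesis's criterion.

```python
import numpy as np

def minsum_decode(H, llr_ch, max_iter=50, window=5):
    """Flooding min-sum LDPC decoding with (a) the conventional stopping
    rule (all parity checks satisfied) and (b) a heuristic early exit when
    the number of unsatisfied checks stagnates, standing in for a smarter
    undecodable-block detector.  Assumes every check has degree >= 2."""
    m, n = H.shape
    rows, cols = np.nonzero(H)              # one entry per Tanner-graph edge
    v2c = llr_ch[cols].astype(float)        # variable-to-check messages
    hard = (llr_ch < 0).astype(int)
    history = []
    for it in range(1, max_iter + 1):
        c2v = np.zeros_like(v2c)
        for c in range(m):                  # check-node (min-sum) update
            e = np.where(rows == c)[0]
            msgs = v2c[e]
            sgn_all = np.prod(np.sign(msgs))
            mag = np.abs(msgs)
            order = np.argsort(mag)
            m1, m2 = mag[order[0]], mag[order[1]]
            for k, edge in enumerate(e):    # extrinsic: exclude own message
                c2v[edge] = (sgn_all * np.sign(msgs[k])
                             * (m2 if k == order[0] else m1))
        total = llr_ch.astype(float).copy()
        np.add.at(total, cols, c2v)         # a-posteriori LLR per variable
        v2c = total[cols] - c2v             # variable-node update (extrinsic)
        hard = (total < 0).astype(int)
        unsat = int(np.sum((H @ hard) % 2))
        if unsat == 0:                      # conventional stopping criterion
            return hard, it, "decoded"
        history.append(unsat)
        if len(history) >= window and len(set(history[-window:])) == 1:
            return hard, it, "undecodable"  # heuristic early termination
    return hard, max_iter, "max-iterations"

# toy run: one unreliable bit in the all-zero codeword of a small [7,4] code
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
llr = np.full(7, 2.0); llr[2] = -1.5
print(minsum_decode(H, llr))
```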
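    Among the existing IDS algorithms the abstract refers to, residual belief propagation is the canonical example: the check-to-variable message that would change the most is propagated first. A minimal sketch follows (brute-force residual search; the thesis's new low-complexity scheme is not shown here).

```python
import numpy as np

def rbp_decode(H, llr_ch, max_updates=200):
    """Sketch of residual belief propagation (RBP), a known informed
    dynamic scheduling rule: update the single check-to-variable message
    with the largest residual first.  Real implementations avoid the
    brute-force residual search with priority queues."""
    m, n = H.shape
    check_nbrs = [np.nonzero(H[c])[0] for c in range(m)]
    var_nbrs = [np.nonzero(H[:, v])[0] for v in range(n)]
    c2v = {(c, v): 0.0 for c in range(m) for v in check_nbrs[c]}

    def msg_v2c(v, c):                      # extrinsic variable-to-check
        return llr_ch[v] + sum(c2v[c2, v] for c2 in var_nbrs[v] if c2 != c)

    def msg_c2v(c, v):                      # min-sum check-to-variable
        msgs = [msg_v2c(u, c) for u in check_nbrs[c] if u != v]
        return np.prod(np.sign(msgs)) * min(abs(x) for x in msgs)

    for _ in range(max_updates):
        # the "informed" scheduling choice: largest-residual edge first
        (c, v), res = max((((c, v), abs(msg_c2v(c, v) - c2v[c, v]))
                           for (c, v) in c2v), key=lambda kv: kv[1])
        if res < 1e-9:
            break                           # messages have converged
        c2v[c, v] = msg_c2v(c, v)
    post = np.array([llr_ch[v] + sum(c2v[c, v] for c in var_nbrs[v])
                     for v in range(n)])
    return (post < 0).astype(int)

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
llr = np.full(7, 2.0); llr[2] = -1.5
print(rbp_decode(H, llr))                   # all-zero codeword expected
```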
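    For intuition on what an iterative decoding threshold is, the binary erasure channel admits a closed-form density-evolution recursion, so the threshold of a regular ensemble can be found by bisection, as sketched below. (The thesis targets the harder AWGN and LDPC convolutional code cases, which this toy example does not cover.)

```python
def bec_threshold(dv, dc, tol=1e-7, iters=2000):
    """Iterative decoding threshold of a regular (dv, dc) LDPC ensemble on
    the binary erasure channel, via bisection on the density-evolution
    recursion  x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False
    lo, hi = 0.0, 1.0
    while hi - lo > 1e-6:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

print(bec_threshold(3, 6))   # ~0.4294 for the (3,6)-regular ensemble
```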

    Practical Security Limits of Continuous-Variable Quantum Key Distribution

    Discrete-modulation continuous-variable quantum key distribution (DM-CV-QKD) systems are very attractive for modern quantum cryptography, since they overcome the disadvantages of Gaussian-modulation (GM) systems while retaining the advantages of using continuous variables. Nonetheless, DM-CV-QKD is still underdeveloped, with very limited study of large constellations. This work aims to increase the knowledge of DM-CV-QKD systems with large constellations, namely irregular and regular M-symbol amplitude phase-shift keying (M-APSK) constellations. To this end, a complete DM-CV-QKD system was implemented, considering collective attacks and reverse reconciliation under a realistic scenario in which Bob knows his detector's noise. Tight security bounds were obtained for M-APSK constellations and GM, both for the mutual information between Bob and Alice and for the Holevo bound between Bob and Eve. M-APSK constellations with a binomial distribution can approximate GM's secret key rate. Without finite-size effects (FSEs), the regular constellation 256-APSK (reg. 32) with a binomial distribution reaches 242.9 km, only 7.2 km less than GM, for a secret key rate of 10⁻⁶ photons per symbol. With FSEs, 256-APSK (reg. 32) achieves 96.4% of GM's maximum transmission distance (2.3 times more than 4-PSK) and 78.4% of GM's maximum compatible excess noise (10.2 times more than 4-PSK). Additionally, larger constellations allow the use of higher modulation variances in a practical implementation, i.e., we are no longer subject to the sub-one limit on the mean number of photons per symbol. The information reconciliation step, considering a binary symmetric channel, the sum-product algorithm, and multi-edge-type low-density parity-check matrices constructed with the progressive edge-growth algorithm, allowed the correction of keys up to 18 km. Multidimensional reconciliation allows 256-APSK (reg. 32) to reconcile keys up to 55 km. Privacy amplification was carried out by applying fast Fourier transforms to the Toeplitz extractor; it was unable to extract keys beyond approximately 49 km, almost half the theoretical value, or for excess noise larger than 0.16 SNU, close to the theoretical value.
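    The security bounds above feed the standard asymptotic secret key rate for collective attacks with reverse reconciliation, usually written in the Devetak-Winter form (the notation here is the conventional one, not taken from the thesis):

```latex
K \;=\; \beta\, I(A\!:\!B) \;-\; \chi(B\!:\!E),
```

    where β is the reconciliation efficiency, I(A:B) the mutual information between Alice and Bob, and χ(B:E) the Holevo bound on Eve's information about Bob's measurement outcomes.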
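    The privacy-amplification step described above, a binary Toeplitz extractor evaluated with FFTs, can be sketched as follows; the function name and interface are illustrative, not from the thesis. The product y = Tx over GF(2) is a slice of the linear convolution of the public seed with the key, so it costs O(L log L) instead of a dense matrix product.

```python
import numpy as np

def toeplitz_extract(key_bits, seed_bits, out_len):
    """Privacy amplification with a binary Toeplitz extractor via the FFT:
    y = T x (mod 2), where the m-by-n Toeplitz matrix T is defined by an
    (m + n - 1)-bit seed as T[i, j] = t[i - j + n - 1], so that T x is the
    slice [n-1 : n-1+m] of the linear convolution t * x."""
    x = np.asarray(key_bits, dtype=float)
    t = np.asarray(seed_bits, dtype=float)
    n, m = len(x), out_len
    assert len(t) == m + n - 1
    L = m + 2 * n - 2                      # full linear-convolution length
    conv = np.fft.irfft(np.fft.rfft(t, L) * np.fft.rfft(x, L), L)
    return np.rint(conv[n - 1 : n - 1 + m]).astype(int) % 2

rng = np.random.default_rng(1)
raw = rng.integers(0, 2, 1024)             # reconciled, partially secret key
seed = rng.integers(0, 2, 256 + 1024 - 1)  # public uniform seed
final_key = toeplitz_extract(raw, seed, 256)
```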

    Superposition Mapping & Related Coding Techniques

    Since Shannon's landmark paper in 1948, it has been known that the capacity of a Gaussian channel can be achieved if and only if the channel outputs are Gaussian. In the low signal-to-noise ratio (SNR) regime, conventional mapping schemes suffice for approaching the Shannon limit, while in the high SNR regime these mapping schemes, which produce uniformly distributed symbols, are insufficient to achieve the capacity. To solve this problem, researchers commonly resort to signal shaping, which reshapes the originally uniform symbol distribution into a Gaussian-like one. Superposition mapping (SM) refers to a class of mapping techniques that use linear superposition to load binary digits onto finite-alphabet symbols suitable for waveform transmission. Unlike conventional mapping schemes, the output symbols of a superposition mapper can easily be made Gaussian-like, which effectively eliminates the need for active signal shaping. For this reason, superposition mapping is of great interest for theoretical research as well as for practical implementation, and it is an attractive alternative to signal shaping for approaching the channel capacity in the high SNR regime. This thesis aims to provide deep insight into the principles of superposition mapping and to derive guidelines for systems adopting it. In particular, the influence of power allocation on system performance, with respect to both the achievable power efficiency and the supportable bandwidth efficiency, is made clear. Considerable effort is spent on finding code structures that are matched to SM. It is shown that currently prevalent code design concepts, which are mostly derived for coded transmission with bijective uniform mapping, do not really fit superposition mapping, which is often non-bijective and nonuniform. As the main contribution, a novel coding strategy called low-density hybrid-check (LDHC) coding is proposed. LDHC codes are optimal and universally applicable for SM with arbitrary types of power allocation.
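    As a concrete illustration of the basic idea, the sketch below implements direct superposition with an explicit power-allocation vector (the weights and block sizes are illustrative, not the thesis's designs): with equal power allocation the symbol distribution is binomial and approaches a Gaussian as the number of superimposed bits grows, which is exactly why separate signal shaping becomes unnecessary.

```python
import numpy as np

def superposition_map(bits, alphas):
    """Superposition mapping: N coded bits are loaded onto one transmit
    symbol by linearly superimposing antipodal chips with power allocation
    alphas:  x = sum_k alphas[k] * (2*b_k - 1)."""
    b = np.asarray(bits).reshape(-1, len(alphas))   # one row per symbol
    return (2 * b - 1) @ np.asarray(alphas, dtype=float)

N = 8
alphas = np.ones(N) / np.sqrt(N)                    # equal power, unit energy
rng = np.random.default_rng(0)
symbols = superposition_map(rng.integers(0, 2, 10_000 * N), alphas)
print(symbols.var())   # ~1, with a binomial (Gaussian-like) histogram
```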

    Bit-Wise Decoders for Coded Modulation and Broadcast Coded Slotted ALOHA

    This thesis deals with two aspects of wireless communications. The first aspect is efficient point-to-point data transmission. To achieve high spectral efficiency, coded modulation, which is a concatenation of higher-order modulation with error correction coding, is used. Bit-interleaved coded modulation (BICM) is a pragmatic approach to coded modulation, where soft information on encoded bits is calculated at the receiver and passed to a bit-wise decoder. Soft information is usually obtained in the form of log-likelihood ratios (also known as L-values), calculated using the max-log approximation. In this thesis, we analyze bit-wise decoders for pulse-amplitude modulation (PAM) constellations over the additive white Gaussian noise (AWGN) channel when the max-log approximation is used for calculating L-values. First, we analyze BICM systems from an information theoretic perspective. We prove that the max-log approximation causes information loss for all PAM constellations and labelings, with the exception of a symmetric 4-PAM constellation labeled with a Gray code. We then analyze how the max-log approximation affects the generalized mutual information (GMI), which is an achievable rate for a standard BICM decoder. Second, we compare the performance of the standard BICM decoder with that of the ML decoder. We show that, when the signal-to-noise ratio (SNR) goes to infinity, the loss in terms of pairwise error probability is bounded by 1.25 dB for any two codewords. The analysis further shows that the loss is zero for a wide range of linear codes.

    The second aspect of wireless communications treated in this thesis is multiple channel access. Our main objective here is to provide reliable message exchange between nodes in a wireless ad hoc network with stringent delay constraints. To that end, we propose an uncoordinated medium access control (MAC) protocol, termed all-to-all broadcast coded slotted ALOHA (B-CSA), that exploits coding over packets at the transmitter side and successive interference cancellation at the receiver side. The protocol resembles low-density parity-check codes and can be analyzed using the theory of codes on graphs. The packet loss rate of the protocol exhibits a threshold behavior with distinct error floor and waterfall regions. We derive a tight error floor approximation that is used for the optimization of the protocol. We also show how the error floor approximation can be used to design protocols for networks where users have different reliability requirements. We use B-CSA in vehicular networks and show that it outperforms carrier sense multiple access, currently adopted as the MAC protocol for vehicular communications. Finally, we investigate the possibility of establishing a handshake in vehicular networks by means of B-CSA.
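    The two L-value computations compared in the first part can be sketched as follows for the symmetric, Gray-labeled 4-PAM constellation singled out above (the constellation scaling and the LLR sign convention are assumptions of this sketch):

```python
import numpy as np

# 4-PAM with Gray labeling: bit pattern -> amplitude
const = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}

def llr(y, sigma2, bit_pos, maxlog=False):
    """Exact vs. max-log L-value of bit `bit_pos` for one AWGN observation y.
    Exact:   L = log( sum_{x: b=1} e^{-(y-x)^2/(2 s^2)} )
               - log( sum_{x: b=0} e^{-(y-x)^2/(2 s^2)} )
    Max-log: L = ( min_{x: b=0}(y-x)^2 - min_{x: b=1}(y-x)^2 ) / (2 s^2)."""
    d = {b: [(y - x) ** 2 for lab, x in const.items() if lab[bit_pos] == b]
         for b in (0, 1)}
    if maxlog:
        return (min(d[0]) - min(d[1])) / (2 * sigma2)
    num = np.logaddexp.reduce([-v / (2 * sigma2) for v in d[1]])
    den = np.logaddexp.reduce([-v / (2 * sigma2) for v in d[0]])
    return num - den

y, sigma2 = 0.4, 0.5
for i in (0, 1):
    print(i, llr(y, sigma2, i), llr(y, sigma2, i, maxlog=True))
```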
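    The connection between B-CSA and codes on graphs can be made concrete with an idealized peeling sketch of the receiver (perfect interference cancellation assumed; the data structures are illustrative, not from the thesis):

```python
def sic_peel(slots):
    """Idealized successive interference cancellation for coded slotted
    ALOHA: repeatedly resolve any slot containing a single unresolved user,
    then cancel that user's replicas from every other slot.  This is the
    same peeling process used to decode LDPC codes over erasure channels.
    `slots` maps slot index -> set of user ids transmitting there."""
    resolved = set()
    progress = True
    while progress:
        progress = False
        for users in slots.values():
            pending = users - resolved
            if len(pending) == 1:           # singleton slot: decode the user
                resolved |= pending         # ...and cancel all its replicas
                progress = True
    return resolved

frame = {0: {"A", "B"}, 1: {"B"}, 2: {"A", "C"}, 3: {"C", "B"}}
print(sorted(sic_peel(frame)))              # ['A', 'B', 'C']
```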

    Novel LDPC coding and decoding strategies: design, analysis, and algorithms

    In this digital era, modern communication systems play an essential part in nearly every aspect of life, with examples ranging from mobile networks and satellite communications to the Internet and data transfer. Unfortunately, all practical communication systems are noisy, which means that we must either improve the physical characteristics of the channel or adopt a systematic remedy, i.e. error control coding. The history of error control coding dates back to 1948, when Claude Shannon published his celebrated work "A Mathematical Theory of Communication", which built a framework for channel coding, source coding and information theory. For the first time, there was evidence for the existence of channel codes that enable reliable communication as long as the information rate of the code does not surpass the so-called channel capacity. Nevertheless, for decades afterwards no codes were shown to closely approach this theoretical bound, until the arrival of turbo codes and the renaissance of LDPC codes. As a strong contender to turbo codes, LDPC codes offer parallel implementation of decoding algorithms and, more crucially, graphical construction of codes. However, they also have drawbacks, e.g. significant performance degradation in the presence of short cycles, or very high decoding latency. In this thesis, we focus on the practical realisation of finite-length LDPC codes and devise algorithms to tackle these issues.

    Firstly, rate-compatible (RC) LDPC codes with short/moderate block lengths are investigated by optimising the graphical structure of the Tanner graph (TG), in order to achieve a variety of code rates (0.1 < R < 0.9) using only a single encoder-decoder pair. As is widely recognised in the literature, the presence of short cycles considerably reduces the overall performance of LDPC codes, which significantly limits their application in communication systems. To reduce the impact of short cycles effectively across different code rates, algorithms for counting short cycles and a graph-related metric called the extrinsic message degree (EMD) are applied in the development of the proposed puncturing and extension techniques. A complete set of simulations demonstrates that the proposed RC designs largely minimise the performance loss caused by puncturing or extension.

    Secondly, at the decoding end, we study novel decoding strategies that compensate for the negative effect of short cycles by reweighting part of the extrinsic messages exchanged between the nodes of a TG. The proposed reweighted belief propagation (BP) algorithms aim to implement efficient decoding, i.e. accurate signal reconstruction with low decoding latency, for LDPC codes via various design methods. A variable factor appearance probability belief propagation (VFAP-BP) algorithm is proposed, along with an improved version called the locally-optimized reweighted (LOW)-BP algorithm; both can significantly enhance decoding performance for regular and irregular LDPC codes. More importantly, the optimisation of the reweighting parameters takes place entirely in an offline stage, so no additional computational complexity is incurred during the real-time decoding process.

    Lastly, two iterative detection and decoding (IDD) receivers are presented for multiple-input multiple-output (MIMO) systems operating in a spatial multiplexing configuration. QR decomposition (QRD)-type IDD receivers utilise the proposed multiple-feedback (MF)-QRD or variable-M (VM)-QRD detection algorithm with a standard BP decoding algorithm, while knowledge-aided (KA)-type receivers are equipped with a simple soft parallel interference cancellation (PIC) detector and the proposed reweighted BP decoders. In the uncoded scenario, the proposed MF-QRD and VM-QRD algorithms approach optimal performance at reduced computational complexity. In the LDPC-coded scenario, simulation results illustrate that the proposed QRD-type IDD receivers offer near-optimal performance after a small number of detection/decoding iterations, and the proposed KA-type IDD receivers significantly outperform receivers using alternative decoding algorithms while requiring similar decoding complexity.
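    The thesis's cycle-counting machinery is more general, but the shortest cycles of interest (length 4) can be counted directly from row overlaps of the parity-check matrix, as this minimal sketch shows:

```python
import numpy as np

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H.
    Two checks sharing k >= 2 variable nodes contribute C(k, 2) 4-cycles,
    so it suffices to inspect the off-diagonal entries of H @ H.T."""
    O = (H @ H.T).astype(np.int64)
    np.fill_diagonal(O, 0)
    return int((O * (O - 1) // 2).sum() // 2)   # each pair counted twice

H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1]])
print(count_4cycles(H))   # 3: each pair of rows overlaps in two columns
```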