
    Logarithmic time encoding and decoding of integer error control codes

    One of the most important characteristics of any error control code (ECC) is the complexity of its encoding/decoding algorithms. Today, there are many ECCs that can correct multiple bit errors, but at the price of high encoding/decoding complexity. Among the rare exceptions are integer ECCs (IECCs), whose serial encoding/decoding algorithms run in O(n) time, where n is the codeword length. In this article, we show that IECCs can be encoded/decoded even faster: their parallel encoding/decoding algorithms have O(log2 n) time complexity.
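
    The parallel construction itself is not spelled out in this abstract; as a purely illustrative sketch (a toy example, not the authors' algorithm), the snippet below shows how a serial O(n) modular accumulation of weighted symbols, the kind of operation checksum-style integer codes rely on, can be evaluated in O(log2 n) parallel depth with a binary reduction tree. The modulus M and the weights are hypothetical.

        # Toy illustration only: a weighted sum of data symbols modulo M.
        # The serial loop takes O(n) steps; the pairwise (tree) reduction reaches
        # the same result in O(log2 n) parallel depth, given enough processors.

        M = 2**16 + 1  # hypothetical modulus, chosen only for illustration

        def serial_check(symbols, weights):
            """O(n) serial accumulation of the weighted sum mod M."""
            acc = 0
            for s, w in zip(symbols, weights):
                acc = (acc + s * w) % M
            return acc

        def tree_check(symbols, weights):
            """Same sum via pairwise reduction: O(log2 n) depth if each level runs in parallel."""
            terms = [(s * w) % M for s, w in zip(symbols, weights)]  # one parallel step
            while len(terms) > 1:
                if len(terms) % 2:  # pad odd-length levels with a neutral element
                    terms.append(0)
                terms = [(a + b) % M for a, b in zip(terms[0::2], terms[1::2])]
            return terms[0]

        data = [3, 141, 59, 26, 53, 58, 97, 93]
        weights = list(range(1, len(data) + 1))
        assert serial_check(data, weights) == tree_check(data, weights)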

    A Practical Nonbinary Decoder for Low-Density Parity-Check Codes with Packet-Sized Symbols

    This paper presents a practical decoder for regular low-density parity-check (LDPC) codes with flexible packet-sized symbols. The proposed hMP-VSD (combined hard-decision message passing with vector symbol decoding) is much less complex than conventional VSD and has the same decoding performance. Regular LDPC codes with systematic encoding are selected for implementation, and the channel is assumed to be the q-ary symmetric channel (q-SC). Different code lengths and column weights of LDPC codes are investigated. The results show that codes with a column weight of 7 provide the best performance for hMP-VSD, while hMP works best with codes having a column weight of 5. With packet-sized symbols, even the rather short (60, 30) code structure yields block lengths of 1,920 to 245,760 bits for symbol sizes of 32 to 4,096 bits. Both the decoder and its encoder were implemented on Raspberry Pi 4 Model B boards, and the results confirm that the computation time of hMP-VSD is 60% to 70% lower than that of VSD for pe in the range 0.05 to 0.1.
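
    As a quick arithmetic check of the quoted block lengths (the obvious computation, not something taken from the paper): a codeword of the (60, 30) code contains 60 symbols, so with b-bit symbols the binary block length is 60 * b, and the q-SC alphabet size is q = 2^b.

        # Block lengths of the (60, 30) code for the two extreme symbol sizes quoted above.
        n_symbols = 60
        for b in (32, 4096):
            print(f"symbol size {b} bits -> block length {n_symbols * b} bits (q = 2^{b})")
        # symbol size 32 bits   -> block length 1920 bits (q = 2^32)
        # symbol size 4096 bits -> block length 245760 bits (q = 2^4096)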

    Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels

    We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with a multiple-input multiple-output (MIMO) system, where decoding can be performed with both inner and outer iterations. The proposed code, called the low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and another glues these subcodes together. It simultaneously supports inner iterations inside the LDPC decoder and outer iterations between the detector and the decoder. It can also achieve the desired design rates because the deployed parity-check matrix has full rank. Simulations show that the LDMC performs favorably over MIMO channels.
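
    The full-rank remark has a concrete reading: for an m x n parity-check matrix H, the design rate 1 - m/n is achieved exactly when H has full row rank, since the actual rate is 1 - rank(H)/n. A minimal GF(2) sketch of this point (toy matrices, not taken from the paper):

        # Rows of H are stored as integer bitmasks; rank is computed over GF(2).
        def gf2_rank(rows):
            """Rank of a binary matrix given as a list of row bitmasks."""
            rank = 0
            for col in range(max(r.bit_length() for r in rows)):
                pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
                if pivot is None:
                    continue
                rows[rank], rows[pivot] = rows[pivot], rows[rank]
                for i in range(len(rows)):
                    if i != rank and rows[i] >> col & 1:
                        rows[i] ^= rows[rank]
                rank += 1
            return rank

        n = 6
        H_full = [0b110100, 0b011010, 0b101001]  # 3 x 6 matrix with full row rank
        H_def  = [0b110100, 0b011010, 0b101110]  # third row = XOR of the first two
        print(1 - gf2_rank(H_full[:]) / n)  # 0.5    -> matches the design rate 1 - 3/6
        print(1 - gf2_rank(H_def[:])  / n)  # ~0.667 -> rank deficiency raises the actual rate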

    Compressing Vector OLE

    Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn a secret linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for Boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn a linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of long instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE. We suggest a new approach for fast generation of pseudorandom instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. Our VOLE generators can be used to enhance the efficiency of a host of cryptographic applications, including secure arithmetic computation and non-interactive zero-knowledge proofs with reusable preprocessing. Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields that resist known attacks. We provide several constructions that offer tradeoffs between different efficiency measures and the underlying intractability assumptions.
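
    For readers unfamiliar with the primitive, the VOLE correlation described above can be written out directly (a toy numeric illustration of the definition over a prime field, not the paper's pseudorandom generator): the sender holds vectors u and v, the receiver holds a scalar x and obtains w = u*x + v componentwise.

        import random

        p = 2**61 - 1  # a Mersenne prime field, chosen only for illustration
        n = 8          # vector length

        u = [random.randrange(p) for _ in range(n)]  # sender's first vector
        v = [random.randrange(p) for _ in range(n)]  # sender's second vector
        x = random.randrange(p)                      # receiver's scalar

        w = [(ui * x + vi) % p for ui, vi in zip(u, v)]  # what the receiver learns

        # The VOLE correlation holds componentwise: w = u*x + v over GF(p).
        assert all(wi == (ui * x + vi) % p for wi, ui, vi in zip(w, u, v))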

    Improving the Bandwidth Efficiency of Multiple Access Channels using Network Coding and Successive Decoding

    In this thesis, different approaches for improving the bandwidth efficiency of multiple access channels (MACs) are proposed. Such improvements can be achieved with methods that use network coding or with methods that implement successive decoding; both methods are discussed here. Under the first method, two novel schemes for using network coding in cooperative networks are proposed. In the first scheme, network coding generates some redundancy in addition to the redundancy generated by the channel code, and these redundancies are used in an iterative decoding system at the destination. In the second scheme, the output of the channel encoder in each source node is shortened before transmission, and the relay, by use of the network code, sends a compressed version of the parts missing from the original transmission; this facilitates the decoding procedure at the destination. Simulation-based optimizations have been developed, and the results indicate that in the case of sources with non-identical power levels, both scenarios outperform the non-relay case. The second method involves a scheme to increase the capacity of an existing channel. This increase is made possible by introducing a new Raptor-coded interfering channel alongside the existing channel. Through successive decoding at the destination, the data of both the main and interfering sources are decoded. We demonstrate that when some power difference exists, there is a tradeoff between the achieved rate and power efficiency, and we find the optimum power allocation for this tradeoff. Ultimately, we propose a power adaptation scheme that allocates the optimal power to the interfering channel based on an estimate of the main channel's condition. Finally, we generalize our work to allow decoding either the secondary source data or the main source data first, and we investigate the performance and delay of each decoding scheme. Since the channels are non-orthogonal, it is possible that for some power allocations, constellation points get erased. To address this problem we use constellation rotation: the constellation of the secondary source is rotated to increase the average distance between the points of the composite constellation resulting from the superposition of the main and interfering sources' constellations. We determine the optimum constellation rotation angle for the interfering source analytically and confirm it with simulations.
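
    The constellation-rotation idea at the end of the abstract can be illustrated numerically (a generic sketch with two equal-power QPSK sources, not the thesis' analytical optimization): with no rotation, points of the superposed constellation coincide, so the minimum distance is zero and erasures can occur; rotating the secondary constellation restores a nonzero minimum distance.

        import cmath, itertools, math

        qpsk = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

        def min_distance(theta, power_ratio=1.0):
            """Minimum pairwise distance of the composite constellation when the
            secondary QPSK is scaled by sqrt(power_ratio) and rotated by theta."""
            rot = math.sqrt(power_ratio) * cmath.exp(1j * theta)
            points = [a + rot * b for a, b in itertools.product(qpsk, qpsk)]
            return min(abs(p - q) for p, q in itertools.combinations(points, 2))

        print(round(min_distance(0.0), 4))  # 0.0 -> coinciding points, i.e. possible erasures
        best = max((min_distance(math.radians(d)), d) for d in range(46))
        print(best)  # (largest minimum distance, rotation angle in degrees) over a 0-45 degree scan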

    An Analog Decoder for Turbo-Structured Low-Density Parity-Check Codes

    In this work, we consider a class of structured regular LDPC codes called Turbo-Structured LDPC (TS-LDPC) codes. TS-LDPC codes outperform random LDPC codes and have a much lower error floor at high signal-to-noise ratio (SNR). In this thesis, Min-Sum (MS) algorithms are adopted for decoding TS-LDPC codes due to their low implementation complexity. We show that the error performance of the MS-based TS-LDPC decoder is comparable with that of the Sum-Product (SP) based decoder and that the error-floor property of TS-LDPC codes is preserved. The TS-LDPC decoding algorithms can be realized with analog or digital circuitry; analog decoders are preferred in many communication systems due to their potential for higher speed, lower power dissipation, and smaller chip area than their digital counterparts. In this work, the implementation of a (120, 75) MS-based TS-LDPC analog decoder is considered. The decoder chip consists of an analog decoder core together with digital input and output blocks; the digital blocks deliver the received signal to the analog core and transfer the estimated codewords to the off-chip module. The analog decoder core is an analog processor that performs decoding on the Tanner graph of the code. Variable and check nodes, the main building blocks of the analog decoder, are designed and evaluated. The check node is the most complicated unit in MS-based decoders, and the minimizer circuit, the fundamental block of a check node, is designed for a good trade-off between speed and accuracy. In addition, the structure of a high-degree minimizer is proposed, taking into account the accuracy, speed, power consumption, and robustness against mismatch of the check node unit. The measurement results demonstrate that the error performance of the chip is comparable with theory: the SNR loss at a bit error rate of 10^-5 is only 0.2 dB, while the information throughput is 750 Mb/s and the energy efficiency of the decoder chip is 17 pJ/b. The proposed decoder outperforms the analog decoders fabricated to date in terms of error performance, throughput, and energy efficiency. It is the first analog decoder implemented in a sub-100-nm technology, it improves the throughput of analog decoders by a factor of 56, and it sets a new state of the art in analog decoding.
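
    For context, the update that the minimizer circuit realizes is the standard min-sum check-node rule (a textbook formulation, included only to make the check node's role concrete, not the thesis' circuit): each outgoing message takes the minimum magnitude of the other incoming messages, with a sign equal to the product of their signs.

        import math

        def ms_check_node(llrs_in):
            """Min-sum check-node update: one outgoing LLR per incoming edge."""
            out = []
            for i in range(len(llrs_in)):
                others = llrs_in[:i] + llrs_in[i + 1:]          # exclude the edge being updated
                sign = math.prod(1 if v >= 0 else -1 for v in others)
                out.append(sign * min(abs(v) for v in others))  # min magnitude, product of signs
            return out

        print(ms_check_node([2.3, -0.7, 1.1, -4.0]))  # [0.7, -1.1, 0.7, -0.7]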

    Interference Cancellation in Distributed Cellular Systems

    Doctorate in Electrical Engineering. This thesis focuses on the interference cancellation problem for multiuser distributed antenna systems. It starts by giving an overview of the main properties of a distributed antenna system, including an analytical investigation of the impact, on the system users, of connecting additional distributed antennas. That analysis shows that the most important system property for reaching the maximum gain from additional transmit antennas is spatial symmetry, and that users at the cell borders benefit the most; these results are confirmed by simulation. The multiuser interference problem is considered for both the one-dimensional (i.e., without coding) and multidimensional (i.e., with coding) cases. In the one-dimensional case, we propose and evaluate a nonlinear precoding algorithm for minimizing the bit error rate of a multiuser MIMO system. Both the single-carrier and multi-carrier cases are tackled, as well as the co-located and distributed antenna scenarios. It is demonstrated that the proposed scheme can be viewed as an extension of the well-known zero-forcing scheme, whose performance is proven to be a lower bound for the generalized scheme. The algorithm was validated extensively through numerical simulations, which indicate near-optimal performance with low complexity. For the multidimensional case, a binary dirty paper coding scheme based on bilayer codes is proposed; in its development, the lossy compression of a binary source is considered as a sub-problem. Simulation results indicate reliable transmission close to the Shannon limit.
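
    The zero-forcing baseline that the proposed nonlinear precoder generalizes can be sketched in a few lines (standard textbook ZF precoding with a random channel, shown only as the reference point named in the abstract, not the thesis' algorithm):

        import numpy as np

        rng = np.random.default_rng(0)
        n_users = n_tx = 4
        # Random flat-fading channel from n_tx transmit antennas to n_users single-antenna users.
        H = (rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

        W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # ZF precoder: right pseudo-inverse of H
        s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_users)  # one QPSK symbol per user
        x = W @ s                                       # transmitted vector (power scaling omitted)
        y = H @ x                                       # noiseless received signal, one entry per user

        assert np.allclose(y, s)  # each user sees only its own symbol: interference is forced to zero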