
    Discriminated Belief Propagation

    Near-optimal decoding of good error control codes is generally a difficult task. However, for a certain class of (sufficiently) good codes, an efficient decoding algorithm with near-optimal performance exists. These codes are defined via a combination of constituent codes with low-complexity trellis representations. Their decoding algorithm is an instance of (loopy) belief propagation and is based on an iterative transfer of constituent beliefs, the beliefs being given by the symbol probabilities computed in the constituent trellises. Even though weak constituent codes are employed, close-to-optimal performance is obtained, i.e., the encoder/decoder pair (almost) achieves the information-theoretic capacity. However, (loopy) belief propagation only performs well for a rather specific set of codes, which limits its applicability. In this paper, a generalisation of iterative decoding is presented. It is proposed to transfer more values than just the constituent beliefs; this is achieved by transferring beliefs obtained by independently investigating parts of the code space. This leads to the concept of discriminators, which are used to improve the decoder resolution within certain areas and which define discriminated symbol beliefs. It is shown that these beliefs approximate the overall symbol probabilities, leading to an iteration rule that (below channel capacity) typically only admits the solution of the overall decoding problem. Via a Gaussian approximation, a low-complexity version of this algorithm is derived. Moreover, the approach may then be applied to a wide range of channel maps without a significant complexity increase.
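
    As a concrete (and heavily simplified) illustration of the iterative transfer of constituent beliefs, the Python sketch below exchanges extrinsic symbol beliefs (log-likelihood ratios) between two constituent decoders in the usual turbo/loopy-belief-propagation schedule. The single-parity-check constituent, the schedule, and all function names are illustrative assumptions standing in for the paper's trellis-based constituent decoders; the discriminator extension itself is not reproduced here.

```python
import numpy as np

def spc_extrinsic(channel_llr, apriori_llr):
    """Extrinsic LLRs under a single-parity-check constraint (tanh rule).
    A toy stand-in for a constituent trellis decoder."""
    total = channel_llr + apriori_llr
    ext = np.empty_like(total)
    for i in range(len(total)):
        others = np.delete(total, i)
        prod = np.prod(np.tanh(others / 2.0))
        ext[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return ext

def iterative_decode(channel_llr, iterations=10):
    """Iteratively exchange extrinsic beliefs between two constituents."""
    apriori = np.zeros_like(channel_llr)
    for _ in range(iterations):
        ext_1 = spc_extrinsic(channel_llr, apriori)   # beliefs from constituent 1
        ext_2 = spc_extrinsic(channel_llr, ext_1)     # beliefs from constituent 2
        apriori = ext_2                               # fed back as a priori info
    return channel_llr + ext_1 + ext_2                # final symbol beliefs

llr = np.array([1.2, -0.4, 0.8, -2.0])                # toy channel observations
print(iterative_decode(llr))
```

    The essential mechanic is that each constituent passes on only extrinsic information, i.e., beliefs about a symbol computed without using that symbol's own a priori input; the paper's discriminated beliefs are proposed as richer quantities to transfer in place of plain constituent beliefs.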

    A unary error correction code for the near-capacity joint source and channel coding of symbol values from an infinite set

    A novel Joint Source and Channel Code (JSCC) is proposed, which we refer to as the Unary Error Correction (UEC) code. Unlike existing JSCCs, our UEC code facilitates the practical encoding of symbol values drawn from a set of infinite cardinality. Conventionally, these symbols are conveyed using Separate Source and Channel Codes (SSCCs), but we demonstrate that the residual redundancy retained after source coding results in a capacity loss, which amounts to 1.11 dB in a particular practical scenario. By contrast, the proposed UEC code can eliminate this capacity loss, or reduce it to an infinitesimally small value. Furthermore, the UEC code has only a moderate complexity, facilitating its employment in practical low-complexity applications.
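
    As a minimal sketch of how a unary mapping accommodates a symbol set of infinite cardinality, the code below encodes positive integers in unary: every value, however large, receives a finite codeword. The terminating-zero convention and the function names are assumptions made for illustration; the UEC code's error-protection layer and its exact bit mapping are not reproduced here.

```python
def unary_encode(symbol: int) -> list:
    """Unary codeword for a positive integer: (symbol - 1) ones followed by a
    terminating zero. The convention is assumed for illustration only."""
    assert symbol >= 1
    return [1] * (symbol - 1) + [0]

def unary_encode_sequence(symbols):
    """Concatenate the unary codewords of a sequence of symbols."""
    bits = []
    for s in symbols:
        bits.extend(unary_encode(s))
    return bits

# Any positive integer maps to a finite codeword, which is what allows the
# scheme to handle symbol values drawn from an infinite set.
print(unary_encode_sequence([1, 3, 2]))   # -> [0, 1, 1, 0, 1, 0]
```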

    The trellis complexity of convolutional codes

    Convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. Linear block codes also have a natural, though not in general regular, “minimal” trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of an unenhanced Viterbi decoding algorithm can be accurately estimated by the number of trellis edge symbols per encoded bit. It would therefore appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the “minimal” trellis representation. Thus, ironically, we seem to know more about the minimal trellis representation for block codes than for convolutional codes. We provide a remedy by developing a theory of minimal trellises for convolutional codes. This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-canonical generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
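
    As a rough sketch of the “trellis edge symbols per encoded bit” measure for the conventional trellis of a convolutional code, the snippet below uses the standard counting convention (2**nu states, 2**k branches per state, n symbols per branch, n encoded bits per section). The function name and this convention are assumptions for illustration; the paper's minimal (trellis-canonical) construction is not reproduced here.

```python
def conventional_trellis_edge_complexity(k: int, n: int, nu: int) -> int:
    """Edge symbols per encoded bit for the conventional trellis of a rate-k/n
    convolutional encoder with nu memory elements: each trellis section has
    2**nu states with 2**k outgoing edges apiece, every edge carries n code
    symbols, and the section emits n encoded bits."""
    edge_symbols_per_section = n * 2 ** (nu + k)
    encoded_bits_per_section = n
    return edge_symbols_per_section // encoded_bits_per_section   # = 2**(nu + k)

# Example: a rate-1/2 encoder with memory 2 (e.g. the familiar (7,5) code) has
# 2**(2 + 1) = 8 edge symbols per encoded bit in its conventional trellis.
print(conventional_trellis_edge_complexity(k=1, n=2, nu=2))   # -> 8
```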

    Constraint Complexity of Realizations of Linear Codes on Arbitrary Graphs

    A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear “local constraint” codes to be associated with the edges and vertices, respectively, of the graph. The κ-complexity of a graphical realization is defined to be the largest dimension of any of its local constraint codes. κ-complexity is a reasonable measure of the computational complexity of a sum-product decoding algorithm specified by a graphical realization. The main focus of this paper is on the following problem: given a linear code C and a graph G, how small can the κ-complexity of a realization of C on G be? As useful tools for attacking this problem, we introduce the Vertex-Cut Bound and the notion of “vc-treewidth” for a graph, which is closely related to the well-known graph-theoretic notion of treewidth. Using these tools, we derive tight lower bounds on the κ-complexity of any realization of C on G. Our bounds enable us to conclude that good error-correcting codes can have low-complexity realizations only on graphs with large vc-treewidth. Along the way, we also prove the interesting result that the ratio of the κ-complexity of the best conventional trellis realization of a length-n code C to the κ-complexity of the best cycle-free realization of C grows at most logarithmically with the code length n; such a logarithmic growth rate is, in fact, achievable.
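
    The κ-complexity itself is easy to evaluate once a realization's local constraint codes are given. The sketch below does so for constraint codes represented by binary generator matrices; this representation and the toy realization are illustrative assumptions, and the Vertex-Cut Bound / vc-treewidth machinery is not reproduced here.

```python
import numpy as np

def gf2_rank(matrix):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    m = np.array(matrix, dtype=np.uint8) % 2
    rank, (rows, cols) = 0, m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # move the pivot row into place
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]               # clear the rest of the column
        rank += 1
    return rank

def kappa_complexity(local_constraint_generators):
    """kappa-complexity of a graphical realization: the largest dimension of
    any of its local constraint codes (each given by a generator matrix)."""
    return max(gf2_rank(g) for g in local_constraint_generators)

# Toy realization with two local constraint codes, of dimensions 2 and 1.
constraints = [
    [[1, 0, 1], [0, 1, 1]],   # dimension 2
    [[1, 1, 1, 1]],           # dimension 1
]
print(kappa_complexity(constraints))   # -> 2
```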

    Cyclic division algebras: a tool for space-time coding

    Multiple antennas at both the transmitter and receiver ends of a wireless digital transmission channel may increase both the data rate and the reliability. Reliable high-rate transmission over such channels can only be achieved through Space–Time coding. Rank and determinant code design criteria have been proposed to enhance diversity and coding gain. The special case of the full-diversity criterion requires that the difference of any two distinct codewords has full rank. Extensive work has been done on Space–Time coding, aiming to find fully diverse codes with high rate. Division algebras have been proposed as a new tool for constructing Space–Time codes, since they are non-commutative algebras that naturally yield linear, fully diverse codes. Their algebraic properties can thus be further exploited to improve the design of good codes. The aim of this work is to provide a tutorial introduction to the algebraic tools involved in the design of codes based on cyclic division algebras. The different design criteria involved will be illustrated, including constellation shaping, the information-lossless property, the non-vanishing determinant property, and the diversity-multiplexing trade-off. The final target is to give the complete mathematical background underlying the construction of the Golden code and the other Perfect Space–Time block codes.
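
    The rank (full-diversity) criterion quoted above translates directly into code: every pairwise difference of distinct codeword matrices must have full rank. The sketch below checks this for a toy 2x2 diagonal codebook; the codebook is an illustrative assumption and is neither the Golden code nor any other Perfect code.

```python
import itertools
import numpy as np

def is_fully_diverse(codebook, tol=1e-9):
    """Full-diversity (rank) criterion: the difference of any two distinct
    codeword matrices must have full rank."""
    n = codebook[0].shape[0]
    return all(np.linalg.matrix_rank(X - Y, tol=tol) == n
               for X, Y in itertools.combinations(codebook, 2))

# Toy 2x2 codebook built from four QPSK symbols placed on the diagonal
# (purely illustrative -- practical Space-Time codes spread symbols across the matrix).
symbols = [1 + 0j, -1 + 0j, 1j, -1j]
codebook = [np.array([[s, 0], [0, s]]) for s in symbols]
print(is_fully_diverse(codebook))   # -> True: every pairwise difference is invertible
```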

    On Minimal Tree Realizations of Linear Codes

    A tree decomposition of the coordinates of a code is a mapping from the coordinate set to the set of vertices of a tree. A tree decomposition can be extended to a tree realization, i.e., a cycle-free realization of the code on the underlying tree, by specifying a state space at each edge of the tree and a local constraint code at each vertex of the tree. The constraint complexity of a tree realization is the maximum dimension of any of its local constraint codes. A measure of the complexity of maximum-likelihood decoding for a code is its treewidth, which is the least constraint complexity of any of its tree realizations. It is known that, among all tree realizations of a code that extend a given tree decomposition, there exists a unique minimal realization that minimizes the state space dimension at each vertex of the underlying tree. In this paper, we give two new constructions of these minimal realizations. As a by-product of the first construction, a generalization of the state-merging procedure for trellis realizations, we obtain the fact that the minimal tree realization also minimizes the local constraint code dimension at each vertex of the underlying tree. The second construction relies on certain code decomposition techniques that we develop. We further observe that the treewidth of a code is related to a measure of graph complexity, also called treewidth. We exploit this connection to resolve a conjecture of Forney's regarding the gap between the minimum trellis constraint complexity and the treewidth of a code, and we present a family of codes for which this gap can be arbitrarily large.
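
    To make "minimizing the state space dimension" concrete, the sketch below evaluates, by brute force on a tiny code, the standard cut-set expression dim(C) - dim(C_J) - dim(C_{J^c}) for the minimal state space dimension across a cut {J, J^c} of the coordinate set (a well-known fact about cycle-free realizations, assumed here rather than taken from this paper). The exhaustive enumeration and the full-rank generator assumption are for illustration only; the paper's two constructions are not reproduced.

```python
import itertools
import numpy as np

def codewords(generator):
    """All codewords spanned (over GF(2)) by the rows of `generator`."""
    G = np.array(generator, dtype=np.uint8) % 2
    k, n = G.shape
    for coeffs in itertools.product([0, 1], repeat=k):
        cw = np.zeros(n, dtype=np.uint8)
        for c, row in zip(coeffs, G):
            if c:
                cw ^= row
        yield cw

def subcode_dim(generator, support):
    """Dimension of the subcode whose codewords are zero outside `support`,
    found by exhaustive enumeration (fine only for tiny toy codes)."""
    n = len(generator[0])
    count = len({tuple(cw) for cw in codewords(generator)
                 if all(cw[i] == 0 for i in range(n) if i not in support)})
    return int(np.log2(count))

def minimal_state_dim(generator, J):
    """Minimal state space dimension across the cut {J, complement}, using the
    cut-set expression dim(C) - dim(C_J) - dim(C_{J^c}). Assumes the generator
    matrix has full rank, so dim(C) equals its number of rows."""
    n, k = len(generator[0]), len(generator)
    Jc = set(range(n)) - set(J)
    return k - subcode_dim(generator, set(J)) - subcode_dim(generator, Jc)

# Toy [4,2] code with generators 1100 and 0011: the cut {0,1} | {2,3} needs no
# state at all, while the interleaved cut {0,2} | {1,3} needs a 2-dimensional state.
G = [[1, 1, 0, 0], [0, 0, 1, 1]]
print(minimal_state_dim(G, {0, 1}))   # -> 0
print(minimal_state_dim(G, {0, 2}))   # -> 2
```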