    On distributed coding, quantization of channel measurements and faster-than-Nyquist signaling

    This dissertation considers three different aspects of modern digital communication systems and is therefore divided into three parts. The first part is distributed coding. This part deals with source and source-channel code design issues for digital communication systems with many transmitters and one receiver, or with one transmitter and one receiver but with side information at the receiver that is not available at the transmitter. Such problems have been attracting attention lately, as they constitute a way of extending classical point-to-point communication theory to networks. In the first part of this dissertation, novel source and source-channel codes are designed by converting each of the considered distributed coding problems into an equivalent classical channel coding or classical source-channel coding problem. The proposed schemes come very close to the theoretical limits and are thus able to exhibit some of the gains predicted by network information theory. In the other two parts of this dissertation, classical point-to-point digital communication systems are considered. The second part is quantization of coded channel measurements at the receiver. Quantization is a way to limit the accuracy of continuous-valued measurements so that they can be processed in the digital domain. Depending on the desired type of processing of the quantized data, different quantizer design criteria should be used. In the second part of this dissertation, the quantized received values from the channel are processed by the receiver, which tries to recover the transmitted information. An exhaustive comparison of several quantization criteria for this case is presented, providing illuminating insight into this quantizer design problem. The third part of this dissertation is faster-than-Nyquist signaling. The Nyquist rate in classical point-to-point bandwidth-limited digital communication systems is considered the maximum transmission rate, or signaling rate, and is equal to twice the bandwidth of the channel. In this last part of the dissertation, we question this Nyquist rate limitation by transmitting at higher signaling rates through the same bandwidth. By mitigating the interference incurred by the faster-than-Nyquist rates, gains over Nyquist-rate systems are obtained.
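    To make the faster-than-Nyquist idea concrete, the sketch below (an illustration of the general principle, not code from the dissertation) samples a unit-energy sinc pulse at the FTN symbol spacing. At the Nyquist spacing (tau = 1) the off-centre samples vanish and the pulses are orthogonal; for tau < 1 non-zero off-centre taps appear, which is exactly the inter-symbol interference an FTN receiver has to mitigate. The function name and the tau values are illustrative choices.

```python
import numpy as np

# Illustrative sketch of faster-than-Nyquist (FTN) signalling with sinc pulses.
# With the symbol period normalised to T = 1, the pulse samples g(k * tau) are
# the ISI taps seen by a symbol-spaced receiver.

def isi_taps(tau, num_taps=5):
    """Sample a sinc pulse at multiples of the FTN spacing tau (T normalised to 1)."""
    k = np.arange(-num_taps, num_taps + 1)
    return np.sinc(k * tau)

# Nyquist-rate spacing: only the centre tap is non-zero (orthogonal pulses).
print("tau = 1.0:", np.round(isi_taps(1.0), 3))
# Faster-than-Nyquist spacing: off-centre taps appear, i.e. ISI to be mitigated.
print("tau = 0.8:", np.round(isi_taps(0.8), 3))
```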

    Irregular Variable Length Coding

    In this thesis, we introduce Irregular Variable Length Coding (IrVLC) and investigate its applications, characteristics and performance in the context of digital multimedia broadcast telecommunications. During encoding, the multimedia signal is represented using a sequence of concatenated binary codewords. These are selected from a codebook, comprising a number of codewords, which, in turn, comprise various numbers of bits. However, during IrVLC encoding, the multimedia signal is decomposed into particular fractions, each of which is represented using a different codebook. This is in contrast to regular Variable Length Coding (VLC), in which the entire multimedia signal is encoded using the same codebook. The application of IrVLCs to joint source and channel coding is investigated in the context of a video transmission scheme. Our novel video codec represents the video signal using tessellations of Variable-Dimension Vector Quantisation (VDVQ) tiles. These are selected from a codebook, comprising a number of tiles having various dimensions. The selected tessellation of VDVQ tiles is signalled using a corresponding sequence of concatenated codewords from a Variable Length Error Correction (VLEC) codebook. This VLEC codebook represents a specific joint source and channel coding case of VLCs, which facilitates both compression and error correction. However, during video encoding, only particular combinations of the VDVQ tiles will perfectly tessellate, owing to their various dimensions. As a result, only particular sub-sets of the VDVQ codebook and, hence, of the VLEC codebook may be employed to convey particular fractions of the video signal. Therefore, our novel video codec can be said to employ IrVLCs. The employment of IrVLCs to facilitate Unequal Error Protection (UEP) is also demonstrated. This may be applied when various fractions of the source signal have different error sensitivities, as is typical in audio, speech, image and video signals, for example. Here, different VLEC codebooks having appropriately selected error correction capabilities may be employed to encode the particular fractions of the source signal. This approach may be expected to yield a higher reconstruction quality than equal protection in cases where the various fractions of the source signal have different error sensitivities. Finally, this thesis investigates the application of IrVLCs to near-capacity operation using EXtrinsic Information Transfer (EXIT) chart analysis. Here, a number of component VLEC codebooks having different inverted EXIT functions are employed to encode particular fractions of the source symbol frame. We show that the composite inverted IrVLC EXIT function may be obtained as a weighted average of the inverted component VLC EXIT functions. Additionally, EXIT chart matching is employed to shape the inverted IrVLC EXIT function to match the EXIT function of a serially concatenated inner channel code, creating a narrow but still open EXIT chart tunnel. In this way, iterative decoding convergence to an infinitesimally low probability of error is facilitated at near-capacity channel SNRs.
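    The weighted-average property mentioned above is easy to illustrate numerically. The sketch below is a minimal illustration rather than the thesis code: the inverted component VLC EXIT functions are placeholder curves, and the weights alpha_k stand for the fractions of the encoded bit stream produced by each component codebook.

```python
import numpy as np

# Minimal illustration of composing an IrVLC EXIT function: the composite
# inverted EXIT function is the weighted average of the inverted component
# VLC EXIT functions, weighted by the fraction of bits each codebook encodes.

I_A = np.linspace(0.0, 1.0, 101)            # a priori mutual information axis

# Hypothetical inverted component EXIT functions I_E = f_k(I_A) (placeholders).
component_exit = [
    0.3 + 0.7 * I_A**2,
    0.6 + 0.4 * I_A,
    0.8 + 0.2 * np.sqrt(I_A),
]
alpha = np.array([0.5, 0.3, 0.2])           # bit fractions per codebook (sum to 1)

composite = sum(a * f for a, f in zip(alpha, component_exit))
print(np.round(composite[[0, 50, 100]], 3)) # composite curve at I_A = 0, 0.5, 1
```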

    Self-concatenated coding for wireless communication systems

    In this thesis, we have explored self-concatenated coding schemes that are designed for transmission over Additive White Gaussian Noise (AWGN) and uncorrelated Rayleigh fading channels. We designed both symbol-based Self-Concatenated Codes using Trellis Coded Modulation (SECTCM) and bit-based Self-Concatenated Convolutional Codes (SECCC), employing a Recursive Systematic Convolutional (RSC) encoder as the constituent code. The design of these codes was carried out with the aid of Extrinsic Information Transfer (EXIT) charts. The EXIT-chart-based design has been found to be an efficient tool for finding the decoding convergence threshold of the constituent codes. Additionally, in order to recover the information loss imposed by employing binary rather than non-binary schemes, a soft-decision demapper was introduced to exchange extrinsic information with the SECCC decoder. To analyse this information exchange, 3D EXIT chart analysis was invoked for visualizing the extrinsic information exchange between the proposed Iterative Decoding aided SECCC and the soft-decision demapper (SECCC-ID). Some of the proposed SECTCM, SECCC and SECCC-ID schemes perform within about 1 dB of the AWGN and Rayleigh fading channels' capacity. A union bound analysis of SECCC codes was carried out to find the corresponding Bit Error Ratio (BER) floors. The union bound of SECCCs was derived for communications over both AWGN and uncorrelated Rayleigh fading channels, based on a novel interleaver concept. The application of SECCCs in both UltraWideBand (UWB) and state-of-the-art video-telephone schemes demonstrated their practical benefits. In order to further exploit the benefits of the low-complexity design offered by SECCCs, we explored their application in a distributed coding scheme designed for cooperative communications, where iterative detection is employed by exchanging extrinsic information between the decoders of the SECCC and RSC at the destination. In the first transmission period of cooperation, the relay receives the potentially erroneous data and attempts to recover the information. The recovered information is then re-encoded at the relay using an RSC encoder. In the second transmission period this information is retransmitted to the destination. The resultant symbols transmitted from the source and relay nodes can be viewed as the coded symbols of a three-component parallel-concatenated encoder. At the destination, a Distributed Binary Self-Concatenated Coding scheme using Iterative Decoding (DSECCC-ID) was employed, where the two decoders (SECCC and RSC) exchange their extrinsic information. It was shown that the DSECCC-ID is a low-complexity scheme, yet capable of approaching the Discrete-input Continuous-output Memoryless Channel's (DCMC) capacity. Finally, we considered coding schemes designed for two nodes communicating with each other with the aid of a relay node, where the relay receives information from the two nodes in the first transmission period. At the relay node, we combine a powerful Superposition Coding (SPC) scheme with SECCC. It is assumed that decoding errors may be encountered at the relay node. The relay node then broadcasts this information in the second transmission period after re-encoding it, again using a SECCC encoder. At the destination, the amalgamated block of the Successive Interference Cancellation (SIC) scheme combined with SECCC then detects and decodes the signal either with or without the aid of a priori information. Our simulation results demonstrate that the proposed scheme is capable of reliably operating at a low BER for transmission over both AWGN and uncorrelated Rayleigh fading channels. We compare the proposed scheme's performance to that of a direct transmission link between the two sources having the same throughput.
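    As a pointer to how a BER floor can be estimated from a distance spectrum, the sketch below evaluates a generic truncated union bound for a rate-R binary code with BPSK over AWGN. It is not the thesis's SECCC bound: the distance spectrum and parameters are made-up placeholders, and the bound is the textbook form P_b <= sum_d (w_d / k) Q(sqrt(2 d R Eb/N0)).

```python
import math

# Generic truncated union bound on the BER floor of a rate-R binary code
# (BPSK over AWGN). distance_spectrum maps Hamming distance d to the total
# information weight w_d of codewords at that distance; k is the number of
# information bits per block.

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(distance_spectrum, k, rate, ebno_db):
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(w_d / k * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, w_d in distance_spectrum.items())

spectrum = {6: 3.0, 8: 12.0, 10: 40.0}   # hypothetical {distance: info weight}
print(union_bound_ber(spectrum, k=1000, rate=0.5, ebno_db=3.0))
```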

    Orthogonal Multiple Access with Correlated Sources: Feasible Region and Pragmatic Schemes

    In this paper, we consider orthogonal multiple access coding schemes, where correlated sources are encoded in a distributed fashion and transmitted, through additive white Gaussian noise (AWGN) channels, to an access point (AP). At the AP, component decoders, associated with the source encoders, iteratively exchange soft information by taking into account the source correlation. The first goal of this paper is to investigate the ultimate achievable performance limits in terms of a multi-dimensional feasible region in the space of channel parameters, deriving insights into the impact of the number of sources. The second goal is the design of pragmatic schemes, where the sources use "off-the-shelf" channel codes. In order to analyze the performance of given coding schemes, we propose an extrinsic information transfer (EXIT)-based approach, which allows us to determine the corresponding multi-dimensional feasible regions. On the basis of the proposed analytical framework, the performance of pragmatic coded schemes, based on serially concatenated convolutional codes (SCCCs), is discussed.
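    To give a feel for what a feasible region in the space of channel parameters looks like, the sketch below is a rough, separation-based check for two correlated binary sources; it reflects our own simplifying assumptions rather than the paper's analysis. The sources are modelled as X2 = X1 XOR E with P(E = 1) = rho, one channel use is assumed per source symbol, and each orthogonal link is credited with the unconstrained AWGN capacity 0.5*log2(1 + SNR). A channel-parameter pair is declared feasible when every Slepian-Wolf entropy constraint is covered by the corresponding sum of capacities.

```python
import numpy as np

# Rough separation-based feasibility check for two correlated binary sources
# over orthogonal AWGN links (illustrative assumptions, see the note above).

def h2(p):
    """Binary entropy function in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def capacity(snr_db):
    """Unconstrained AWGN capacity in bits per channel use."""
    return 0.5 * np.log2(1.0 + 10.0 ** (snr_db / 10.0))

def feasible(snr_db_pair, rho):
    c1, c2 = (capacity(s) for s in snr_db_pair)
    # Slepian-Wolf constraints: H(X1|X2) = H(X2|X1) = h2(rho), H(X1,X2) = 1 + h2(rho).
    return c1 >= h2(rho) and c2 >= h2(rho) and c1 + c2 >= 1 + h2(rho)

print(feasible((3.0, 3.0), rho=0.05))   # strong correlation, moderate SNR -> True
print(feasible((0.0, 0.0), rho=0.5))    # independent sources, lower SNR -> False
```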

    Towards practical minimum-entropy universal decoding

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in the block length n. Second, recent results in the optimization literature have provided polytope projection algorithms with complexity that is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error-correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
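    For concreteness, the following is a brute-force sketch of the minimum-entropy decoding rule itself in the syndrome-based compression setting, not the paper's linear-programming approximation: among all sequences consistent with the transmitted syndrome, the decoder returns the one with the smallest empirical (type) entropy. The parity-check matrix and source block are toy values chosen only for illustration, and the exhaustive search is exponential, so it is usable only at toy sizes.

```python
import numpy as np
from itertools import product
from collections import Counter

# Brute-force minimum-entropy decoder for syndrome-based block compression of a
# binary source: the encoder sends s = H x (mod 2); the decoder returns the
# sequence consistent with s that has the smallest empirical entropy.

def empirical_entropy(x):
    n = len(x)
    return -sum((c / n) * np.log2(c / n) for c in Counter(x).values())

def min_entropy_decode(H, syndrome):
    n = H.shape[1]
    best, best_h = None, float("inf")
    for cand in product([0, 1], repeat=n):          # exponential: toy sizes only
        if np.array_equal(H.dot(cand) % 2, syndrome):
            h = empirical_entropy(cand)
            if h < best_h:
                best, best_h = np.array(cand), h
    return best

H = np.array([[1, 1, 0, 1, 0, 0],                   # toy 3x6 parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
x = np.array([0, 0, 0, 0, 1, 0])                    # a sparse (low-entropy) source block
s = H.dot(x) % 2
print("decoded:", min_entropy_decode(H, s), "original:", x)
```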

    On some new approaches to practical Slepian-Wolf compression inspired by channel coding

    This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for the optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.
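    For reference, the auxiliary-random-variable characterisation alluded to above is usually stated as follows (a sketch of the standard form from the information-theory literature, not a quotation of this paper's results):

```latex
% Ahlswede-Koerner region: (R_X, R_Y) is achievable if and only if, for some
% auxiliary random variable U satisfying the Markov chain X -- Y -- U,
\begin{align}
  R_X &\ge H(X \mid U), \\
  R_Y &\ge I(Y ; U).
\end{align}
```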