
    Error bounds for parallel communication channels


    On signal design by the R sub 0 criterion for non-white Gaussian noise channels

    The use of the R sub 0 criterion for modulation system design is investigated for channels with non-white Gaussian noise. A signal space representation of the waveform channel is developed, and the cut-off rate R sub 0 for vector channels with additive non-white Gaussian noise and unquantized demodulation is derived. When the signal input to the channel is a continuous random vector, maximization of R sub 0 under an average signal energy constraint leads to a water-filling interpretation of the optimal energy distribution in signal space. The necessary condition for a finite signal set to maximize R sub 0 with constrained energy and an equally likely probability assignment of signal vectors is presented, and an algorithm is outlined for numerically computing the optimum signal set. A necessary condition is also found on a constrained-energy, finite signal set which maximizes a Taylor series approximation of R sub 0. This signal set is compared with the finite signal set which has the water-filling average energy distribution.
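The water-filling energy distribution mentioned above can be sketched numerically. This is a generic illustration of water-filling over parallel Gaussian subchannels, not the paper's R sub 0 optimization; the function name and the water-level search are assumptions for the sketch.

```python
import numpy as np

def water_filling(noise_vars, total_energy):
    """Allocate energy E_k = max(mu - N_k, 0) across subchannels with
    noise levels N_k, choosing the water level mu so the allocations
    sum to the energy budget. Illustrative sketch only."""
    noise_vars = np.asarray(noise_vars, dtype=float)
    sorted_n = np.sort(noise_vars)          # quietest subchannels fill first
    n = len(sorted_n)
    # Try using the k quietest subchannels and solve for the water level.
    for k in range(n, 0, -1):
        mu = (total_energy + sorted_n[:k].sum()) / k
        if mu > sorted_n[k - 1]:            # all k allocations are positive
            break
    energies = np.maximum(mu - noise_vars, 0.0)
    return energies, mu

# Three subchannels with noise levels 1, 2, 4 and a total budget of 3:
# the noisiest subchannel receives no energy at all.
energies, mu = water_filling([1.0, 2.0, 4.0], total_energy=3.0)
```

The water level settles at mu = 3, giving allocations [2, 1, 0]: energy concentrates where the noise is lowest, exactly the "pouring water into a vessel" picture.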

    Error bounds for parallel communication channels.

    Bibliography: p. 87-88. Contract no. DA36-039-AMC-03200(E)

    On performance analysis and implementation issues of iterative decoding for graph based codes

    There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, these codes remained useless in practical applications for lack of decoders offering good performance at acceptable complexity. The invention of turbo codes marks a milestone in channel coding theory in that they achieve near-Shannon-limit performance using an elegant iterative decoding algorithm. This success stimulated intensive research on long compound codes sharing the same decoding mechanism. Among these long codes are low-density parity-check (LDPC) codes and product codes, which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation. A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution. Two simulation-based methods for approximating decoding capacity are applied to LDPC codes and their effectiveness is evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated. It has been intensively studied for turbo codes but seems to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed. The performance of LDPC codes with infinite block length is well predicted using the density evolution procedure. Two implementation issues in iterative decoding of LDPC codes are studied. One is the design of a quantized decoder. The other is the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. It is shown that the key point in designing a quantized decoder is to pick a proper dynamic range.
    Quantization loss in terms of bit error rate (BER) performance can be kept remarkably low, provided that the dynamic range is chosen wisely. The decoding capacity under a fixed SNR offset is obtained. The robustness of LDPC codes of practical length is evaluated through simulations. It is found that the amount of SNR offset that can be tolerated depends on the code length. The remaining part of this dissertation deals with iterative decoding of product codes. Two issues in iterative decoding of product codes are investigated. One is improving BER performance by mitigating cycle effects. The other is a parallel decoding structure, which is conceptually better than serial decoding and yields lower decoding latency.
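At a check node, the Max-Log-MAP approximation of the exact tanh-rule reduces to the well-known min-sum update: the outgoing magnitude on each edge is the minimum of the other incoming magnitudes, and the sign is the product of the other signs. The sketch below illustrates that standard update; it is not the dissertation's specific density evolution procedure.

```python
import numpy as np

def check_node_update(incoming_llrs):
    """Min-sum (Max-Log-MAP) check-node update over a set of incoming
    log-likelihood ratios. For each edge i, the output excludes edge i's
    own contribution. Illustrative sketch of the standard rule."""
    llrs = np.asarray(incoming_llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    mags = np.abs(llrs)
    total_sign = np.prod(signs)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(mags, i)
        # Dividing out edge i's sign from the total product of signs.
        out[i] = (total_sign * signs[i]) * others.min()
    return out

out = check_node_update([2.0, -1.0, 3.0])  # → [-1.0, 2.0, -1.0]
```

The single negative input flips the sign of every other output, and each output magnitude is capped by the smallest of the remaining inputs, which is why min-sum is slightly pessimistic compared with the exact rule.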

    Communication for wideband fading channels : on theory and practice

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 163-167). This dissertation investigates some information-theoretic aspects of communication over wideband fading channels and their applicability to the design of signaling schemes approaching the wideband capacity limit. This work thus leads to an enhanced understanding of wideband fading channel communication and to the proposal of novel efficient signaling schemes which perform very close to the optimal limit. The potential and limitations of such signaling schemes are studied. First, the structure of the optimal input signals is investigated for two commonly used channel models: the discrete-time memoryless Rician fading channel and the Rayleigh block fading channel. When the input is subject to an average power constraint, it is shown that the capacity-achieving input amplitude distribution for a Rician channel is discrete with a finite number of mass points in the low-SNR regime. A similar discrete structure for the optimal amplitude is proven to hold over the entire SNR range for the average-power-limited Rayleigh block fading channel. Channels with a peak power constraint are also analyzed. When the input is constrained to have limited peak power, we show that if its Kuhn-Tucker condition satisfies a sufficient condition, the optimal input amplitude is discrete with a finite number of values. In the low-SNR regime, the discrete structure becomes binary. Next, we consider signaling over general fading models. Multi-tone FSK, a signaling scheme which uses low-duty-cycle frequency-shift keying signals (essentially orthogonal binary signals), is proposed and shown to be capacity achieving in the wideband limit. Transmission of information over wideband fading channels using Multi-tone FSK is studied through both theoretical analysis and numerical simulation.
    With a finite bandwidth and noncoherent detection, the achievable data rate of the Multi-tone FSK scheme is close to the wideband capacity limit. Furthermore, a feedback scheme is proposed for Multi-tone FSK to improve the codeword error performance. It is shown that if the receiver can feed back received signal quality to the transmitter, a significant improvement in codeword error probability can be achieved. Experimental results are also obtained to demonstrate the features and practicality of Multi-tone FSK. by Cheng Luo. Ph.D.
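Noncoherent detection of orthogonal FSK, as used above, cannot rely on the carrier phase, so the standard detector simply picks the tone with the largest received energy. A minimal sketch of that decision rule, assuming one complex matched-filter sample per tone (the function name is an illustrative assumption):

```python
import numpy as np

def noncoherent_fsk_detect(received):
    """Noncoherent M-ary orthogonal FSK detection: with unknown phase,
    decide for the tone index k maximizing the received energy |y_k|^2.
    Sketch of the standard energy detector, not the dissertation's
    specific Multi-tone FSK receiver."""
    received = np.asarray(received, dtype=complex)
    return int(np.argmax(np.abs(received) ** 2))

# Tone 2 carries the signal; the other bins contain only noise.
tone = noncoherent_fsk_detect([0.1 + 0.0j, 0.2 - 0.1j, 1.5 + 0.5j, 0.05 + 0.0j])
```

Because the decision uses only magnitudes, a fading-induced phase rotation of the signal tone leaves the detector's output unchanged, which is what makes the scheme attractive on wideband fading channels.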

    Packet data communications over coded CDMA with hybrid type-II ARQ

    This dissertation presents an in-depth investigation of turbo-coded CDMA systems for packet data communications. It is divided into three parts: (1) CDMA with hybrid FEC/ARQ in a deterministic environment, (2) CDMA with hybrid FEC/ARQ in a random access environment, and (3) an implementation issue in turbo decoding. As a preliminary, the performance of CDMA with hybrid FEC/ARQ is investigated in a deterministic environment. It highlights the practically achievable spectral efficiency of a CDMA system with turbo codes and the effect of code rates on the performance of systems with MF and LMMSE receivers, respectively. For given ensemble distance spectra of punctured turbo codes, an improved union bound is used to evaluate the error probability of the ML turbo decoder with an MF receiver and with an LMMSE receiver front-end, and the corresponding spectral efficiency is then computed as a function of system load. In the second part, a generalized analytical framework is first provided to analyze hybrid type-II ARQ in a random access environment. When applying hybrid type-II ARQ, the probability of packet success and the packet length generally differ from attempt to attempt. Since the conventional analytical model, customarily employed for ALOHA systems with pure or hybrid type-I ARQ, cannot be applied in this case, an expanded analytical model is introduced. It can be regarded as a network of queues, and Jackson's and Burke's theorems can be applied to simplify the analysis. The second part is further divided into two subtopics: CDMA slotted ALOHA with hybrid type-II ARQ using packet combining, and CDMA unslotted ALOHA with hybrid type-II ARQ using code combining. For code combining, rate-compatible punctured turbo (RCPT) codes are examined. In the third part, noting that decoding delay is crucial to fast ARQ, a parallel MAP algorithm is proposed to reduce the computational decoding delay of turbo codes.
    It utilizes the forward and backward variables computed in the previous iteration to provide boundary distributions for each sub-block MAP decoder. It has at least two advantages over the existing parallel scheme: no performance degradation and no additional computation.
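The idea of seeding each sub-block with boundary distributions from the previous iteration can be sketched on a toy trellis. Everything here (function name, data layout, the simple normalized forward recursion) is an illustrative assumption, not the dissertation's algorithm; the point is only that sub-blocks can run independently once their boundary alphas are supplied.

```python
import numpy as np

def parallel_forward(gammas, prev_alphas, n_blocks):
    """Sketch of a parallelized forward (alpha) recursion: split the
    trellis into sub-blocks and initialize each sub-block's starting
    alpha from the value stored at that trellis position in the
    previous iteration. `gammas[t]` is the S x S branch-metric matrix
    at step t; `prev_alphas` holds one length-S distribution per
    trellis position (uniform on the very first pass)."""
    T = len(gammas)
    S = gammas[0].shape[0]
    alphas = np.zeros((T + 1, S))
    for blk in np.array_split(np.arange(T), n_blocks):
        alphas[blk[0]] = prev_alphas[blk[0]]   # boundary from last iteration
        for t in blk:
            a = alphas[t] @ gammas[t]
            alphas[t + 1] = a / a.sum()        # normalize for stability
    return alphas

# Toy 2-state trellis, 4 steps, split into 2 sub-blocks.
G = np.array([[0.9, 0.1], [0.2, 0.8]])
prev = np.full((5, 2), 0.5)                    # uniform boundaries, first pass
alphas = parallel_forward([G] * 4, prev, n_blocks=2)
```

Each sub-block's inner loop touches only its own slice of the trellis, so the sub-blocks could run concurrently; as iterations proceed, the stored boundary alphas converge and the parallel recursion approaches the serial one.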

    Communication over fading dispersive channels

    Performance prediction and error analysis for digital signal transmission over fading dispersive channels.

    An Efficient Hardware Implementation of LDPC Decoder

    Reliable communication over noisy channels is an old but still challenging problem for communication engineers. Low-density parity-check (LDPC) codes are linear block codes proposed by Robert G. Gallager in 1960. LDPC codes have lower decoding complexity than turbo codes. In most recent wireless communication standards, LDPC codes are among the most popular forward error correction (FEC) codes due to their excellent error-correcting capability. This thesis focuses on the hardware implementation of the LDPC code used in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) standard ratified in 2005. In the architecture of the decoder, the structure of the DVB-S2 code admits a memory mapping scheme that allows 360 functional units to operate simultaneously. The functional units are optimized to reduce hardware resource utilization on an FPGA. A novel design of a range-addressable look-up table (RALUT) for the hyperbolic tangent function is proposed that simplifies the LDPC decoding algorithm while performance remains the same. Commonly, RALUTs are uniformly distributed over the input; in the proposed method, instead of representing the LUT input uniformly, a non-uniform scale assigns more values to inputs near zero. The Zynq XC7Z030 FPGA is used to evaluate the complexity of the proposed design. Synthesis results show a speed increase due to the LUT method; however, LUTs demand more memory. Thus, resource usage is decreased by applying the RALUT method.
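The non-uniform RALUT idea can be illustrated in software: each table entry covers a range of inputs and stores one output, and a geometric grid places more breakpoints near zero, where the tanh-based check-node kernel varies fastest. The breakpoint count and geometric spacing below are illustrative assumptions, not the thesis's hardware parameters.

```python
import numpy as np

# The tanh-based kernel used in Log-MAP LDPC check-node processing.
phi = lambda x: -np.log(np.tanh(x / 2))

def build_ralut(func, breakpoints):
    """One stored output per input range, evaluated at the range center."""
    centers = (breakpoints[:-1] + breakpoints[1:]) / 2
    return func(centers)

def ralut_lookup(x, breakpoints, values):
    """Range-addressable lookup: find the range containing x and return
    its stored value (clipped at the table edges)."""
    idx = np.clip(np.searchsorted(breakpoints, x) - 1, 0, len(values) - 1)
    return values[idx]

# Geometric spacing puts far more breakpoints near zero than a uniform
# grid of the same size would.
bp = np.geomspace(1e-3, 8.0, 64)
vals = build_ralut(phi, bp)
approx = ralut_lookup(1.0, bp, vals)   # compare against phi(1.0) ≈ 0.772
```

With only 63 stored values, the table stays within a few hundredths of the true kernel around x = 1; a uniform grid would waste most of its entries on the flat tail where phi is nearly zero.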

    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly high-performance ones, and the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but the main components are developed within the following two themes. Ideas of message passing are applied to solve the erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, especially for recovering the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study on the marginal number of erasures correctable by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes. To maximize the recovery capability of the In-place algorithm in network transmissions, we propose a product packetisation structure to reconcile the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity is less than the quadratic complexity bound. We then extend this to the Rayleigh fading channel to solve both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
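On the BEC, belief propagation reduces to the classic "peeling" procedure: repeatedly find a parity check with exactly one erased bit and solve for it by XOR. The sketch below shows that generic recovery step, not the thesis's matrix representation or guess algorithms; the list-of-checks data layout is an illustrative assumption.

```python
def peel_decode(checks, received):
    """Peeling decoder for the binary erasure channel. `checks` is a
    list of parity checks, each a list of bit positions; `received`
    uses None for erased bits. Stops when no check has exactly one
    remaining erasure (either all solved or stuck in a stopping set)."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if bits[i] is None]
            if len(erased) == 1:
                # The lone erased bit must make the check's XOR zero.
                i = erased[0]
                bits[i] = sum(bits[j] for j in check if j != i) % 2
                progress = True
    return bits

# Two parity checks over four bits; positions 1 and 3 were erased.
decoded = peel_decode([[0, 1, 2], [1, 2, 3]], [1, None, 0, None])  # → [1, 1, 0, 1]
```

When the decoder stalls with erasures remaining, the unsolved positions form a stopping set; that is exactly the situation the thesis's guess and multi-guess extensions of bit-flipping are designed to break.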