
    A STUDY OF ERASURE CORRECTING CODES

    This work focuses on erasure codes, particularly high-performance ones, and the related decoding algorithms, especially those with low computational complexity. The work is composed of several pieces, but the main components are developed within two themes. First, message-passing ideas are applied to recover erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, applied in particular to recover the erasures left unsolved by the recovery algorithm. A novel maximum-likelihood (ML) decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study of the marginal number of correctable erasures under the In-place algorithm yields a lower bound on the average number of correctable erasures. Following the same spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose a product packetisation structure that keeps its computational complexity manageable; combined with this structure, the complexity stays below the quadratic bound. We then extend the approach to the Rayleigh fading channel to correct both errors and erasures. Concatenated with an outer code, such as a BCH code, product-packetised RS codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative decoding of optimally designed LDPC codes.
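
    To make the recovery step concrete, below is a minimal sketch of iterative erasure recovery on the BEC in the message-passing spirit described above (the classic peeling view of BP): any parity check with exactly one erased position determines that position. The matrix, received word, and all names are illustrative; the thesis's matrix-based BP formulation and the guess/multi-guess extensions are not reproduced here.

```python
# Minimal sketch of iterative erasure recovery (peeling / BP on the BEC).
# H: list of parity checks, each a list of bit indices;
# y: received word with None marking erasures.

def peel_erasures(H, y):
    y = list(y)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [i for i in check if y[i] is None]
            if len(erased) == 1:  # exactly one unknown: solvable by parity
                i = erased[0]
                y[i] = sum(y[j] for j in check if j != i) % 2
                progress = True
    return y  # any remaining None entries are unsolved erasures

# Example: (7,4) Hamming-style checks, two erasures
H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
received = [0, None, 1, 0, 1, None, 1]
print(peel_erasures(H, received))  # -> [0, 0, 1, 0, 1, 1, 1]
```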

    Synchronization for capacity-approaching coded communication systems

    The dissertation concentrates on synchronization of capacity-approaching error-correction codes that are deployed in noisy channels with very low signal-to-noise ratio (SNR). The major topics are symbol timing synchronization and frame synchronization.

    Capacity-approaching error-correction codes, like turbo codes and low-density parity-check (LDPC) codes, are capable of reaching very low bit error rates and frame error rates in noisy channels by iterative decoding. To fully achieve the potential decoding capability of turbo codes and LDPC codes, proper symbol timing synchronization, frame synchronization, and channel state estimation are required. The dissertation proposes a joint estimator of symbol time delay and channel SNR for symbol timing recovery, and a maximum a posteriori (MAP) frame synchronizer for frame synchronization.

    Symbol timing recovery is implemented by sampling and interpolation. The received signal is sampled multiple times per symbol period with unknown delay and unknown SNR. A joint estimator estimates the time delay and the SNR, and the signal is rebuilt by interpolating the available samples using the estimated time delay. The intermediate decoding results enable decision-feedback estimation: the estimates of time delay and SNR are refined by iterative processing, which improves the system performance significantly.

    Usually the sampling rate is assumed to be a strict integer multiple of the symbol rate. However, in a practical system the local oscillators in the transmitter and the receiver may have random drifts. The sampling rate is then no longer an exact multiple of the symbol rate, and the sampling time follows a random walk, which may harm the system performance severely. The dissertation analyzes the effect of random time walks and proposes to mitigate it by overlapped sliding windows and iterative processing.

    Frame synchronization is required to find the correct boundaries of codewords. MAP frame synchronization, in the sense of minimizing the frame sync failure rate, is investigated. The MAP frame synchronizer exploits the low-density parity-check attributes of the capacity-approaching codes. The accuracy of frame synchronization is adequate for the considered coded systems to work reliably under very low SNR.
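
    As an illustration of the sampling-and-interpolation step, the sketch below uses the standard non-data-aided square-law (Oerder-Meyr) timing estimator and linear interpolation. This is a textbook stand-in to make the idea concrete, not the dissertation's joint delay/SNR estimator or its decision-feedback refinement.

```python
import numpy as np

def estimate_delay(x, sps):
    """Square-law timing estimate from oversampled baseband samples x,
    sampled at sps samples per symbol. The spectral line at the symbol
    rate carries the timing phase; returns delay as a fraction of a symbol."""
    k = np.arange(len(x))
    c = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * k / sps))
    return -np.angle(c) / (2 * np.pi)

def resample(x, frac_delay, sps):
    """Rebuild symbol-spaced samples by linear interpolation at the estimated
    offset (a practical receiver would use a higher-order interpolator)."""
    t = np.arange(0, len(x) - sps, sps) + (frac_delay % 1.0) * sps
    i = t.astype(int)
    mu = t - i
    return (1 - mu) * x[i] + mu * x[i + 1]
```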

    Identification of error correction codes in signals intelligence

    Error correction coding is an integral part of a digital communication system. In signals intelligence the aim is to recover the transmitted messages, and part of this task is identifying the error correction coding method used. The purpose of this study is to present an overview of different identification methods for forward error correcting codes and to test the performance of these methods in a controlled setting. The codes discussed in this work are block codes and convolutional codes, with a main focus on low-density parity-check (LDPC) codes and turbo codes. Test cases for LDPC code identification are presented, and remarks are made about the performance and limits of the methods.
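
    One common identification idea in this setting can be illustrated as hypothesis testing: hard-decision bits are checked against each candidate parity-check matrix, and the candidate whose checks are satisfied far more often than the 50% expected for random data is declared. The candidate set, threshold, and the assumption of known frame alignment below are simplifications for illustration, not the thesis's exact procedure.

```python
import numpy as np

def syndrome_match_rate(H, bits):
    """Fraction of parity checks satisfied over all received frames.
    H: (m, n) binary parity-check matrix; bits: (frames, n) hard decisions."""
    syndromes = (bits @ H.T) % 2
    return 1.0 - syndromes.mean()

def identify(candidates, bits, threshold=0.9):
    """candidates: dict mapping code name -> H. Returns the best-matching
    code name, or None if no candidate clears the threshold."""
    rates = {name: syndrome_match_rate(H, bits) for name, H in candidates.items()}
    best = max(rates, key=rates.get)
    return best if rates[best] > threshold else None
```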

    Iterative algorithms for lossy source coding

    Thesis (M.Eng. and S.B.) by Venkat Chandar, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 65-68). This thesis explores the problems of lossy source coding and information embedding. For lossy source coding, we analyze low-density parity-check (LDPC) codes and low-density generator matrix (LDGM) codes for quantization under a Hamming distortion. We prove that LDPC codes can achieve the rate-distortion function. We also show that the variable node degree of any LDGM code must become unbounded for these codes to come arbitrarily close to the rate-distortion bound. For information embedding, we introduce the double-erasure information embedding channel model. We develop capacity-achieving codes for the double-erasure channel model. Furthermore, we show that our codes can be efficiently encoded and decoded using belief propagation techniques. We also discuss a generalization of the double-erasure model which shows that the double-erasure model is closely related to other models considered in the literature.
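
    For context, the benchmark these quantizers are measured against is the rate-distortion function under Hamming distortion, which in the standard symmetric Bernoulli(1/2) source setting (assumed here) is

```latex
R(D) = 1 - h(D), \qquad h(D) = -D \log_2 D - (1 - D)\log_2(1 - D), \qquad 0 \le D \le \tfrac{1}{2}.
```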

    REGION-BASED ADAPTIVE DISTRIBUTED VIDEO CODING CODEC

    The recently developed Distributed Video Coding (DVC) is typically suitable for applications where conventional video coding is not feasible because of its inherently high-complexity encoding. Examples include video surveillance using wireless/wired video sensor networks and applications using mobile cameras. With DVC, the complexity is shifted from the encoder to the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where an estimate of the original frame, called "side information", is generated using motion compensation at the decoder. Compression is achieved by sending only the extra information needed to correct this estimate. An error-correcting code is used under the assumption that the estimate is a noisy version of the original frame, with the rate needed being a certain amount of parity bits. The side information is assumed to have become available at the decoder through a virtual channel. Due to the limitations of the compensation method, the predicted frame, or side information, is expected to have varying degrees of success; these limitations stem from location-specific, non-stationary estimation noise. To counter this, conventional video coders, like MPEG, use frame partitioning to allocate an optimal coder to each partition and hence achieve better rate-distortion performance. The same, however, has not been used in DVC, as it increases the encoder complexity. This work proposes partitioning the considered frame into many coding units (regions), where each unit is encoded differently. This partitioning is, however, done at the decoder while generating the side information, and the region map is sent to the encoder at a very small rate penalty. The partitioning allows allocation of appropriate DVC coding parameters (virtual channel, rate, and quantizer) to each region. The resulting region map is compressed by a quadtree algorithm and communicated to the encoder via the feedback channel. Rate control in DVC is performed by channel coding techniques (turbo codes, LDPC, etc.), and the performance of the channel code depends heavily on the accuracy of the virtual channel model that models the estimation error for each region. In this work, a turbo code has been used and an adaptive WZ DVC is designed both in the transform domain and in the pixel domain. Transform-domain WZ video coding (TDWZ) has distinctly superior performance compared to normal pixel-domain Wyner-Ziv (PDWZ) coding, since it exploits the spatial redundancy during encoding. The performance evaluations show that the proposed system is superior to existing distributed video coding solutions: although it requires extra bits to transmit the region map, the rate gain is still noticeable, and it outperforms the state-of-the-art frame-based DVC by 0.6-1.9 dB. The feedback channel (FC) has the role of adapting the bit rate to the changing statistics between the side information and the frame to be encoded. In the unidirectional scenario, the encoder must perform the rate control; to do so it must compute typical side information, yet the rate cannot be calculated exactly at the encoder, only estimated. This work therefore also proposes a feedback-free region-based adaptive DVC solution in the pixel domain, based on a machine learning approach to estimate the side information. Although the performance evaluations show a rate penalty, it is acceptable considering the simplicity of the proposed algorithm.
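
    The region-map compression mentioned above can be sketched as a standard quadtree split: a block is emitted as a single leaf when all its labels agree, and is otherwise divided into four quadrants. This is a generic illustration under assumed power-of-two block sizes; the codec's actual quadtree and any entropy coding of the tokens may differ.

```python
# Quadtree compression of a region map: recursively split a square block
# until every label inside it agrees. region_map is a 2D list of labels;
# (x, y) is the block's top-left corner and size its (power-of-two) side.

def quadtree_encode(region_map, x, y, size):
    labels = {region_map[y + j][x + i] for j in range(size) for i in range(size)}
    if len(labels) == 1 or size == 1:
        return ('leaf', labels.pop())      # uniform block: emit one label
    half = size // 2
    return ('split',                       # mixed block: recurse into quadrants
            quadtree_encode(region_map, x, y, half),
            quadtree_encode(region_map, x + half, y, half),
            quadtree_encode(region_map, x, y + half, half),
            quadtree_encode(region_map, x + half, y + half, half))
```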

    State of the art baseband DSP platforms for Software Defined Radio: A survey

    Software Defined Radio (SDR) is an innovative approach which is becoming a more and more promising technology for future mobile handsets. Several proposals in the field of embedded systems have been introduced by different universities and industries to support SDR applications. This article presents an overview of current platforms and analyzes the related architectural choices, the current issues in SDR, as well as potential future trends. Peer reviewed.

    GPU-Accelerated Demodulation for a Satellite Ground Station

    One consequence of the increasing number of small satellite missions is an increasing demand for high-data-rate downlinks. As the satellites transmit at high data rates, ground-side receivers need to demodulate the transmitted data as quickly as possible. While application-specific hardware can be designed, software defined radio solutions for ground stations are attractive for their flexibility, adaptability, and portability. Another industry trend is the increasing use of Graphics Processing Units (GPUs) in general-purpose processing. By performing many operations simultaneously, GPUs can accelerate processing when given a problem that can be implemented in a parallel manner. Furthermore, once a parallel algorithm is implemented, further speedups are possible by increasing hardware resources without any revision to the algorithm. This project combines these ideas by implementing a software defined radio algorithm that demodulates high-speed data on a GPU. It demonstrates the viability of the GPU in software defined radio applications, particularly in the area of fast demodulation.
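
    The property that makes demodulation amenable to GPU acceleration is that hard-decision slicing is independent per sample, so each GPU thread can handle one symbol. The NumPy sketch below shows only this data-parallel structure, for an assumed QPSK constellation; the project's actual GPU kernels and modulation scheme are not reproduced.

```python
import numpy as np

# Unit-energy QPSK constellation (assumed for illustration).
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def demod_qpsk(symbols):
    """Map each received complex sample to its nearest constellation index.
    Every row of the distance matrix is independent of the others, so on a
    GPU each thread would compute one row: one thread per symbol."""
    d = np.abs(symbols[:, None] - QPSK[None, :])  # (n_symbols, 4) distances
    return np.argmin(d, axis=1)                   # nearest-point indices
```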

    Techniques for Low-latency in Software-defined Radio-based Networks

    Decreased budgets have pushed the United States Air Force towards using existing systems in new ways. The use of unmanned aerial vehicle swarms is one example of the reuse of existing systems. One problem with the increased utilization of these swarms is congestion of the electromagnetic spectrum. Software-defined or cognitive radios have been proposed as the basis for a potentially robust communications solution. The present research aims to develop and test a genetic algorithm-based cognitive engine as a first step towards real-time engines that could be used in future swarms. Here, latency is the optimization objective of primary importance. In testing the engine, particular items of interest include the number of solutions evaluated in a given bound and the engine's reliability in yielding acceptable network performance. Initial experiments indicate the engine can consider significant portions of the search space within a relatively small bound and that it is efficient at finding highly fit solutions. Future work includes evaluating how well high fitness correlates to acceptable performance and testing the engine with additional noise floors.
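
    A genetic-algorithm cognitive engine of the kind described can be sketched as a standard selection/crossover/mutation loop over radio parameters with latency as the fitness objective. The parameter space and fitness function below are placeholder assumptions, not the thesis's actual radio knobs or measured network latency.

```python
import random

# Hypothetical radio parameter space (illustrative, not the thesis's).
PARAM_SPACE = {"tx_power_dbm": range(0, 21), "payload_bytes": range(64, 1025, 64)}

def random_solution():
    return {k: random.choice(list(v)) for k, v in PARAM_SPACE.items()}

def fitness(sol):
    # Placeholder: the real engine would score measured network latency.
    return -(sol["payload_bytes"] / (1 + sol["tx_power_dbm"]))

def evolve(generations=50, pop_size=20, mutation=0.1):
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # fittest first
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in PARAM_SPACE}
            if random.random() < mutation:           # random point mutation
                k = random.choice(list(PARAM_SPACE))
                child[k] = random.choice(list(PARAM_SPACE[k]))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```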