145 research outputs found

    Blind identification of an unknown interleaved convolutional code

    We give an efficient method for reconstructing the block interleaver and recovering the convolutional code when several noisy interleaved codewords are given. We reconstruct the block interleaver without any assumption on its structure. Experimental tests demonstrate the efficiency of this method even with moderate noise.
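    As a rough illustration of the object being reconstructed (this sketch is not the authors' identification algorithm), a classical block interleaver writes bits row-wise into a rows x cols array and reads them out column-wise; blind identification amounts to recovering that hidden permutation from the interleaved codewords. A minimal Python sketch, with the dimensions chosen purely for the example:

# Illustrative only: a generic row-in/column-out block interleaver. The
# dimensions are assumed here; the cited method reconstructs the
# interleaver without assuming such a structure.
def block_interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    # Write row-wise into an implicit rows x cols array, read column-wise.
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    # Inverse permutation: write column-wise, read row-wise.
    out = [0] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[k]
            k += 1
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0]                  # toy 2 x 3 block
    mixed = block_interleave(data, 2, 3)
    assert block_deinterleave(mixed, 2, 3) == data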

    Iterative Equalization and Source Decoding for Vector Quantized Sources

    In this contribution, an iterative (turbo) channel equalization and source decoding scheme is considered. In our investigations the source is modelled as a Gaussian-Markov source, which is compressed with the aid of vector quantization. The communications channel is modelled as a time-invariant channel contaminated by intersymbol interference (ISI). Since the ISI channel can be viewed as a rate-1 encoder and since the redundancy of the source cannot be perfectly removed by source encoding, a joint channel equalization and source decoding scheme may be employed for enhancing the achievable performance. In our study the channel equalization and the source decoding are performed iteratively on a bit-by-bit basis under the maximum a posteriori (MAP) criterion. The channel equalizer accepts the a priori information provided by the source decoder and also extracts extrinsic information, which in turn acts as a priori information for improving the source decoding performance. Simulation results are presented for characterizing the achievable performance of the iterative channel equalization and source decoding scheme. Our results show that iterative channel equalization and source decoding is capable of achieving an improved performance by efficiently exploiting the residual redundancy of the vector-quantization-assisted source coding.
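    The turbo exchange described above is conventionally expressed in log-likelihood ratio (LLR) form; the relations below use generic turbo-principle notation rather than the authors' exact formulation, and any interleaving between the two blocks is omitted. Each block subtracts its a priori input from its a posteriori output, so that only extrinsic information is passed on:

% Extrinsic information exchange between the MAP equalizer (eq) and the
% MAP source decoder (dec) for each bit b_k (requires amsmath):
\begin{align}
  L_{e}^{\mathrm{eq}}(b_k)  &= L_{\mathrm{app}}^{\mathrm{eq}}(b_k)  - L_{a}^{\mathrm{eq}}(b_k), &
  L_{a}^{\mathrm{dec}}(b_k) &= L_{e}^{\mathrm{eq}}(b_k), \\
  L_{e}^{\mathrm{dec}}(b_k) &= L_{\mathrm{app}}^{\mathrm{dec}}(b_k) - L_{a}^{\mathrm{dec}}(b_k), &
  L_{a}^{\mathrm{eq}}(b_k)  &= L_{e}^{\mathrm{dec}}(b_k).
\end{align}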

    Irregular Variable Length Coding

    In this thesis, we introduce Irregular Variable Length Coding (IrVLC) and investigate its applications, characteristics and performance in the context of digital multimedia broadcast telecommunications. During IrVLC encoding, the multimedia signal is represented using a sequence of concatenated binary codewords. These are selected from a codebook, comprising a number of codewords, which, in turn, comprise various numbers of bits. However, during IrVLC encoding, the multimedia signal is decomposed into particular fractions, each of which is represented using a different codebook. This is in contrast to regular Variable Length Coding (VLC), in which the entire multimedia signal is encoded using the same codebook.

    The application of IrVLCs to joint source and channel coding is investigated in the context of a video transmission scheme. Our novel video codec represents the video signal using tessellations of Variable-Dimension Vector Quantisation (VDVQ) tiles. These are selected from a codebook, comprising a number of tiles having various dimensions. The selected tessellation of VDVQ tiles is signalled using a corresponding sequence of concatenated codewords from a Variable Length Error Correction (VLEC) codebook. This VLEC codebook represents a specific joint source and channel coding case of VLCs, which facilitates both compression and error correction. However, during video encoding, only particular combinations of the VDVQ tiles will perfectly tessellate, owing to their various dimensions. As a result, only particular sub-sets of the VDVQ codebook and, hence, of the VLEC codebook may be employed to convey particular fractions of the video signal. Therefore, our novel video codec can be said to employ IrVLCs.

    The employment of IrVLCs to facilitate Unequal Error Protection (UEP) is also demonstrated. This may be applied when various fractions of the source signal have different error sensitivities, as is typical in audio, speech, image and video signals, for example. Here, different VLEC codebooks having appropriately selected error correction capabilities may be employed to encode the particular fractions of the source signal. This approach may be expected to yield a higher reconstruction quality than equal protection in cases where the various fractions of the source signal have different error sensitivities.

    Finally, this thesis investigates the application of IrVLCs to near-capacity operation using EXtrinsic Information Transfer (EXIT) chart analysis. Here, a number of component VLEC codebooks having different inverted EXIT functions are employed to encode particular fractions of the source symbol frame. We show that the composite inverted IrVLC EXIT function may be obtained as a weighted average of the inverted component VLC EXIT functions. Additionally, EXIT chart matching is employed to shape the inverted IrVLC EXIT function to match the EXIT function of a serially concatenated inner channel code, creating a narrow but still open EXIT chart tunnel. In this way, iterative decoding convergence to an infinitesimally low probability of error is facilitated at near-capacity channel SNRs.
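    The weighted-average property mentioned above is commonly written as follows, in generic notation in which alpha_n denotes the fraction of the source symbol frame encoded by component codebook n (the inverted curve used for EXIT chart matching is this composite function with its axes swapped):

% Composite IrVLC EXIT function as a weighted average of the N component
% VLEC EXIT functions, weighted by the bit fractions alpha_n:
\begin{equation}
  I_{E}^{\mathrm{IrVLC}}(I_A) \;=\; \sum_{n=1}^{N} \alpha_n \, I_{E,n}(I_A),
  \qquad \sum_{n=1}^{N} \alpha_n = 1, \quad \alpha_n \ge 0 .
\end{equation}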

    Virheenkorjauskoodien tunnistus signaalitiedustelussa (Identification of error correction codes in signals intelligence)

    Error correction coding is an integral part of a digital communication system. In signals intelligence the aim is to recover the transmitted messages, and part of this task is identifying the error correction coding method that was used. The purpose of this study is to present an overview of different identification methods for forward error correcting codes and to test the performance of these methods in a controlled setting. The codes discussed in this work are block codes and convolutional codes, with the main focus on low-density parity-check (LDPC) codes and turbo codes. Test cases for LDPC code identification are presented, and remarks are made about the performance and limitations of the methods.
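    As a toy illustration of the dual-code idea that underlies several of the identification methods surveyed (a sketch under strong simplifying assumptions: noiseless, synchronised codewords of known length, with the (7,4) Hamming code chosen purely for the example), candidate parity checks can be recovered as the GF(2) null space of a matrix whose rows are intercepted codewords:

import numpy as np

# Toy sketch: recover parity checks of an unknown linear block code from
# noiseless codewords by computing the GF(2) null space of the codeword
# matrix. Real identification methods must also cope with noise, unknown
# synchronisation and unknown code length.

def gf2_nullspace(M):
    """Return a basis of {h : M h = 0 over GF(2)} as rows of an array."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivot_cols = []
    r = 0
    for c in range(cols):
        pivot = np.nonzero(M[r:, c])[0]
        if pivot.size == 0:
            continue
        p = pivot[0] + r
        M[[r, p]] = M[[p, r]]                 # swap pivot row up
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]                  # eliminate column c elsewhere
        pivot_cols.append(c)
        r += 1
        if r == rows:
            break
    free_cols = [c for c in range(cols) if c not in pivot_cols]
    basis = []
    for f in free_cols:
        h = np.zeros(cols, dtype=np.uint8)
        h[f] = 1
        for i, c in enumerate(pivot_cols):
            h[c] = M[i, f]                    # back-substitute the pivots
        basis.append(h)
    return np.array(basis, dtype=np.uint8)

# Generator of the (7,4) Hamming code in systematic form G = [I | P].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
messages = np.array([[(i >> j) & 1 for j in range(4)] for i in range(16)],
                    dtype=np.uint8)
codewords = messages @ G % 2                  # all 16 codewords
H = gf2_nullspace(codewords)                  # recovered parity checks
assert not np.any(codewords @ H.T % 2)        # every codeword satisfies them
print(H)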

    REGION-BASED ADAPTIVE DISTRIBUTED VIDEO CODING CODEC

    The recently developed Distributed Video Coding (DVC) is typically suitable for applications where conventional video coding is not feasible because of its inherently high-complexity encoding. Examples include video surveillance using wireless/wired video sensor networks and applications using mobile cameras. With DVC, the complexity is shifted from the encoder to the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where an estimate of the original frame, called "side information", is generated using motion compensation at the decoder. Compression is achieved by sending only the extra information that is needed to correct this estimate. An error-correcting code is used under the assumption that the estimate is a noisy version of the original frame, and the rate needed is a certain amount of parity bits. The side information is assumed to have become available at the decoder through a virtual channel. Due to the limitations of the compensation method, the predicted frame, or side information, is expected to have varying degrees of success. These limitations stem from location-specific non-stationary estimation noise. To avoid these, conventional video coders, such as MPEG, make use of frame partitioning to allocate the optimum coder to each partition and hence achieve better rate-distortion performance. The same, however, has not been used in DVC as it increases the encoder complexity.

    This work proposes partitioning the considered frame into many coding units (regions), where each unit is encoded differently. This partitioning is, however, done at the decoder while generating the side information, and the region map is sent to the encoder at very little rate penalty. The partitioning allows allocation of appropriate DVC coding parameters (virtual channel, rate and quantizer) to each region. The resulting region map is compressed by employing a quadtree algorithm and communicated to the encoder via the feedback channel. Rate control in DVC is performed by channel coding techniques (turbo codes, LDPC, etc.). The performance of the channel code depends heavily on the accuracy of the virtual channel model that models the estimation error for each region. In this work, a turbo code has been used and an adaptive WZ DVC is designed both in the transform domain and in the pixel domain. Transform-domain WZ video coding (TDWZ) has distinctly superior performance compared to normal Pixel-Domain Wyner-Ziv (PDWZ) coding, since it exploits the spatial redundancy during encoding. The performance evaluations show that the proposed system is superior to existing distributed video coding solutions. Although the proposed system requires extra bits representing the "region map" to be transmitted, the rate gain is still noticeable and it outperforms the state-of-the-art frame-based DVC by 0.6-1.9 dB.

    The feedback channel (FC) has the role of adapting the bit rate to the changing statistics between the side information and the frame to be encoded. In the unidirectional scenario, the encoder must perform the rate control. To correctly estimate the rate, the encoder must calculate typical side information. However, the rate cannot be exactly calculated at the encoder; it can only be estimated. This work also proposes a feedback-free region-based adaptive DVC solution in the pixel domain based on a machine learning approach to estimate the side information. Although the performance evaluations show a rate penalty, it is acceptable considering the simplicity of the proposed algorithm.
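    A minimal sketch of the quadtree idea used to compress the region map (illustrative only: the block sizes, symbols and traversal order here are assumptions, not the codec's actual bitstream format). A homogeneous block is emitted as a single leaf symbol; otherwise the block is split into four quadrants that are coded recursively:

# Minimal quadtree coder for a square region map whose side is a power of
# two: a homogeneous block becomes one leaf symbol, otherwise the block is
# split into four quadrants and each quadrant is coded recursively.
def quadtree_encode(region, r0, c0, size):
    block = [region[r][c0:c0 + size] for r in range(r0, r0 + size)]
    first = block[0][0]
    if all(v == first for row in block for v in row):
        return [("leaf", first)]               # homogeneous: one symbol
    half = size // 2
    out = [("split", None)]
    for dr in (0, half):
        for dc in (0, half):
            out += quadtree_encode(region, r0 + dr, c0 + dc, half)
    return out

if __name__ == "__main__":
    # 4x4 toy region map with two coding classes (0 and 1).
    region = [[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 1, 1]]
    symbols = quadtree_encode(region, 0, 0, 4)
    print(len(symbols), "symbols instead of", 16, "raw labels")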

    Iterative decoding and detection for physical layer network coding

    Wireless networks comprising multiple relays are very common and it is important that all users are able to exchange messages via relays in the shortest possible time. A promising technique for achieving this is physical layer network coding (PNC), where the time taken to exchange messages between users is reduced by exploiting the interference at the relay due to the multiple incoming signals from the users. At the relay, the interference is demapped to a binary sequence representing the exclusive-OR of both users' messages. The time to exchange messages is reduced because the relay broadcasts the network coded message to both users, who can then acquire the desired message by applying the exclusive-OR of their original message with the network coded message. However, although PNC can increase throughput, this comes at the expense of performance degradation due to errors resulting from the demapping of the interference to bits. A number of papers in the literature have investigated PNC with an iterative channel coding scheme in order to improve performance. In this thesis, however, the end-to-end (E2E) performance of PNC is investigated for the three most common iterative coding schemes: turbo codes, low-density parity-check (LDPC) codes and trellis bit-interleaved coded modulation with iterative decoding (BICM-ID). It is well known that in most scenarios turbo and LDPC codes perform similarly and can achieve near-Shannon-limit performance, whereas BICM-ID does not perform quite as well but has a lower complexity. However, the results in this thesis show that on a two-way relay channel (TWRC) employing PNC, LDPC codes do not perform well and BICM-ID actually outperforms them while also performing comparably with turbo codes. Also presented in this thesis is an extrinsic information transfer (ExIT) chart analysis of the iterative decoders for each coding scheme, which is used to explain this surprising result.

    Another problem arising from the use of PNC is the transfer of reliable information from the received signal at the relay to the destination nodes. The demapping of the interference to binary bits means that reliability information about the received signal is lost, and this results in a significant degradation in performance when applying soft-decision decoding at the destination nodes. This thesis proposes the use of traditional angle modulation (frequency modulation (FM) and phase modulation (PM)) when broadcasting from the relay, where the real and imaginary parts of the complex received symbols at the relay modulate the frequency or phase of a carrier signal, while maintaining a constant envelope. This is important since the complex received values at the relay are more likely to be centred around zero, and it is undesirable to transmit long sequences of low values due to potential synchronisation problems at the destination nodes. Furthermore, the complex received values, obtained after angle demodulation, are used to derive more reliable log-likelihood ratios (LLRs) of the received symbols at the destination nodes and consequently improve the performance of the iterative decoders for each coding scheme compared with conventionally coded PNC.
This thesis makes several important contributions: investigating the performance of different iterative channel coding schemes combined with PNC, presenting an analysis of the behaviour of the different iterative decoding algorithms when PNC is employed using ExIT charts, and proposing the use of angle modulation at the relay to transfer reliable information to the destination nodes and thereby improve the performance of the iterative decoding algorithms. The results from this thesis will also be useful for future research projects in the areas of PNC that are currently being addressed, such as synchronisation techniques and receiver design.
Iraqi Ministry of Higher Education and Scientific Research
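    A toy sketch of the PNC message exchange described in the abstract above, assuming noiseless, perfectly synchronised BPSK transmissions (the thesis itself of course treats noisy reception, channel coding and iterative decoding):

import numpy as np

# Toy noiseless PNC round on a two-way relay channel: both users transmit
# BPSK simultaneously, the relay demaps the superimposed signal directly to
# the XOR of the two bit streams and broadcasts it, and each user recovers
# the other's bits by XORing with its own.
rng = np.random.default_rng(0)
bits_a = rng.integers(0, 2, 16)
bits_b = rng.integers(0, 2, 16)

x_a = 1 - 2 * bits_a                       # BPSK: bit 0 -> +1, bit 1 -> -1
x_b = 1 - 2 * bits_b
y_relay = x_a + x_b                        # superposition seen at the relay

# |y| = 2 means the bits agree (XOR = 0); y = 0 means they differ (XOR = 1).
xor_relay = (y_relay == 0).astype(int)
assert np.array_equal(xor_relay, bits_a ^ bits_b)

# Broadcast phase: each user strips its own bits to obtain the other's.
recovered_b_at_a = xor_relay ^ bits_a
recovered_a_at_b = xor_relay ^ bits_b
assert np.array_equal(recovered_b_at_a, bits_b)
assert np.array_equal(recovered_a_at_b, bits_a)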