
    LDPCA code construction for Slepian-Wolf coding

    Error-correcting codes used for Distributed Source Coding (DSC) generally assume a random distribution of errors. However, in certain DSC applications the error distribution can be predicted, so this assumption fails and performance becomes sub-optimal. This letter considers the construction of rate-adaptive Low-Density Parity-Check (LDPC) codes in which the edges of the variable nodes receiving unreliable information are distributed evenly among all the check nodes. Simulation results show that the proposed codes can reduce the gap to the theoretical bounds by up to 56% compared to traditional codes.
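    As a rough illustration of the edge-placement idea described above, the following Python sketch builds a toy parity-check matrix in which the edges of variable nodes flagged as unreliable are assigned to check nodes in round-robin order, so no check node accumulates a disproportionate share of unreliable information. All function names, dimensions and the set of unreliable positions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def build_H(n_vars=1000, n_checks=500, var_degree=3, unreliable=None, seed=0):
    """Toy LDPC parity-check matrix with even spreading of unreliable edges."""
    rng = np.random.default_rng(seed)
    H = np.zeros((n_checks, n_vars), dtype=np.uint8)
    unreliable = set(unreliable or [])
    rr = 0  # round-robin pointer over check nodes
    for v in range(n_vars):
        if v in unreliable:
            # Spread this node's edges evenly over all check nodes.
            for _ in range(var_degree):
                H[rr % n_checks, v] = 1
                rr += 1
        else:
            # Conventional random placement for the remaining nodes.
            rows = rng.choice(n_checks, size=var_degree, replace=False)
            H[rows, v] = 1
    return H

# Hypothetical scenario: every tenth bit is expected to be unreliable.
H = build_H(unreliable=range(0, 1000, 10))
print(H.sum(axis=0)[:5])  # column weights, i.e. variable-node degrees
```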

    Low-density parity-check codes for asymmetric distributed source coding

    The research work is partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta). This scholarship is partly financed by the European Union - European Social Fund (ESF 1.25). Low-Density Parity-Check (LDPC) codes achieve good performance, tending towards the Slepian-Wolf bound, when used as channel codes in Distributed Source Coding (DSC). Most LDPC codes found in the literature are designed assuming a random distribution of transmission errors. However, certain DSC applications can predict the error locations with some accuracy, a feature that can be exploited to design application-specific LDPC codes that outperform traditional ones. This paper proposes a novel architecture for asymmetric DSC in which the encoder estimates the location of the errors within the side information and interleaves the bits with a high probability of error to the beginning of the codeword. The LDPC codes are designed to provide a higher level of protection to these front bits. Simulation results show that correct localization of errors pushes the performance of the system on average 13.3% closer to the Slepian-Wolf bound compared to randomly constructed LDPC codes. If the error-localization prediction fails, so that the errors are randomly distributed, the performance remains in line with that of the traditional DSC architecture.
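    A hedged sketch of the interleaving step described above: bits whose estimated error probability exceeds a threshold are permuted to the front of the codeword before encoding, and the permutation is inverted after decoding. The threshold and all names are illustrative assumptions.

```python
import numpy as np

def interleave_unreliable_first(bits, p_err, threshold=0.2):
    """Move bits with high estimated error probability to the front."""
    bits, p_err = np.asarray(bits), np.asarray(p_err)
    perm = np.concatenate([np.flatnonzero(p_err >= threshold),
                           np.flatnonzero(p_err < threshold)])
    return bits[perm], perm

def deinterleave(bits, perm):
    """Undo the permutation at the decoder."""
    out = np.empty_like(bits)
    out[perm] = bits
    return out

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 16)
p_err = rng.random(16)           # hypothetical per-bit error estimates
shuffled, perm = interleave_unreliable_first(bits, p_err)
assert np.array_equal(deinterleave(shuffled, perm), bits)
```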

    Modified distribution of correlation noise for improved Wyner-Ziv video coding performance

    This research work was partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta) and by the European Union - European Social Fund (ESF 1.25). Despite theorems predicting that Distributed Video Coding can achieve the same performance as traditional predictive video coding schemes, the coding efficiency of practical architectures is still far from these bounds. This is attributed to the poor Side Information (SI) estimated at the decoder and to the inability of the channel codes to recover the source at the Slepian-Wolf (SW) limits. This paper tackles the latter issue by recovering the SI bit-planes starting from the most unreliable bit of each coefficient. Most of the mismatch in the SI is thus accumulated within the first decoded bit-planes, leaving the last bit-planes with little or no mismatch. Low-Density Parity-Check Accumulate (LDPCA) codes benefit from such a compact distribution of correlation noise, since they deviate less from the SW bounds when the mismatch is concentrated in a few higher-entropy bit-planes. Furthermore, with this setup, most of the last bit-planes can be recovered very effectively using just 8-bit or 16-bit Cyclic Redundancy Check (CRC) codes. Experimental results show that the proposed scheme can reduce the Wyner-Ziv bit-rates by up to 21% compared to the DISCOVER codec.
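    A minimal sketch of the decode-order idea: bit-planes are processed from the most unreliable plane upward, and a short CRC confirms planes that arrive with little or no mismatch, so no syndrome bits need be requested for them. The CRC-8 polynomial (0x07) and every name here are assumptions for illustration, not the paper's exact choices.

```python
def crc8(bits, poly=0x07):
    """Bitwise CRC-8 over a sequence of 0/1 ints (toy implementation)."""
    reg = 0
    for b in bits:
        reg ^= (b & 1) << 7
        reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

def plane_matches(si_plane, crc_from_encoder):
    """Accept the side-information plane directly if its CRC agrees."""
    return crc8(si_plane) == crc_from_encoder

# Decode order: plane 0 = most unreliable bit first, cleanest plane last.
source_planes = [[1, 0, 1, 1], [0, 0, 1, 0], [0, 1, 1, 1]]  # toy 3-plane source
si_planes     = [[1, 1, 0, 1], [0, 0, 1, 0], [0, 1, 1, 1]]  # first plane mismatched
for k, (src, si) in enumerate(zip(source_planes, si_planes)):
    if plane_matches(si, crc8(src)):
        print(f"plane {k}: recovered from SI via CRC alone")
    else:
        print(f"plane {k}: mismatch, request LDPCA syndrome bits")
```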

    Distributed Video Coding: Iterative Improvements

    Distributed Video Coding for Multiview and Video-plus-depth Coding

    High-Quality Symmetric Wyner–Ziv Coding Scheme for Low-Motion Videos

    Traditional Wyner-Ziv video coding (WZVC) structures require either intra (Key) or Wyner-Ziv (WZ) coding of frames. Unfortunately, keeping the video quality approximately constant implies drastic bit-rate fluctuations, because consecutive frames of different types (Key or WZ) exhibit significantly different compression performance. Moreover, certain scenarios severely limit the tolerable rate fluctuation. This work proposes a WZVC scheme with low bit-rate fluctuations based on a symmetric coding structure. It first investigates the performance of a generic non-asymmetric distributed source coding structure, showing that the low-density parity-check accumulate channel decoding method is best suited. This is used as a basis to design a symmetric WZVC scheme in which every input video frame is divided into four parallel subframes through subsampling, and the subframes are then encoded symmetrically. Compared with the traditional asymmetric WZVC scheme, the proposed scheme achieves higher bit-rate stability over time, a significant advantage for reliable transmission in the many wireless communication environments where bit-rate fluctuations are strongly constrained. Simulation results show the effectiveness of the proposed symmetric WZVC scheme in maintaining a steady bit rate and quality, and compare its quality with that of the traditional WZVC scheme.
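    The frame-splitting step can be pictured as a 2x2 polyphase decomposition, sketched below; the specific sampling pattern and function names are assumptions for illustration and may differ from the paper's exact construction.

```python
import numpy as np

def split_subframes(frame):
    """Split a frame into four subframes by 2x2 polyphase subsampling."""
    return [frame[0::2, 0::2], frame[0::2, 1::2],
            frame[1::2, 0::2], frame[1::2, 1::2]]

def merge_subframes(subs):
    """Reassemble the full frame from its four subframes."""
    h, w = subs[0].shape
    frame = np.empty((2 * h, 2 * w), dtype=subs[0].dtype)
    frame[0::2, 0::2], frame[0::2, 1::2] = subs[0], subs[1]
    frame[1::2, 0::2], frame[1::2, 1::2] = subs[2], subs[3]
    return frame

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 frame
assert np.array_equal(merge_subframes(split_subframes(frame)), frame)
```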

    Improved rate-adaptive codes for distributed video coding

    The research work is partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta). This scholarship is partly financed by the European Union - European Social Fund (ESF 1.25). Distributed Video Coding (DVC) is a coding paradigm which shifts the major computationally intensive tasks from the encoder to the decoder. Temporal correlation is exploited at the decoder by predicting the Wyner-Ziv (WZ) frames from the adjacent key frames. Compression is then achieved by transmitting just the parity information required to correct the predicted frame and recover the original frame. This paper proposes an algorithm which identifies most of the unreliable bits in the predicted bit-planes by considering the discrepancies in the previously decoded bit-plane. The design of the Low-Density Parity-Check (LDPC) codes used is then biased to provide better protection to the unreliable bits. Simulation results show that, for the same target quality, the proposed scheme can reduce the WZ bit rates by up to 7% compared to traditional schemes.
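    A hedged sketch of the unreliable-bit identification step: positions where the previously decoded bit-plane disagreed with its side-information prediction are flagged as likely errors in the current plane, so the biased LDPC design can protect them more strongly. Function and variable names are illustrative, not the paper's.

```python
import numpy as np

def flag_unreliable(prev_decoded, prev_si):
    """Positions that were wrong in the last plane are suspect in this one."""
    return np.flatnonzero(np.asarray(prev_decoded) != np.asarray(prev_si))

prev_si      = np.array([0, 1, 1, 0, 1, 0, 0, 1])  # predicted previous plane
prev_decoded = np.array([0, 1, 0, 0, 1, 1, 0, 1])  # corrected previous plane
suspect = flag_unreliable(prev_decoded, prev_si)
print(suspect)  # -> [2 5]: give these positions stronger protection in H
```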

    Adaptive Distributed Source Coding Based on Bayesian Inference

    Distributed Source Coding (DSC) is an important topic in both information theory and communications. DSC exploits the correlations among sources to compress data, and it has the advantage of being simple to carry out. Within DSC, Slepian-Wolf (S-W) and Wyner-Ziv (W-Z) are two important problems, corresponding to lossless and lossy compression, respectively. Although the theoretical bounds of the S-W and W-Z problems have been known for many decades, designing codes that achieve these bounds remains an open problem. This dissertation focuses on three DSC problems: adaptive Slepian-Wolf decoding for two binary sources (ASWDTBS), compression of correlated temperature data in a sensor network (CCTDSN), and streamlined genome sequence compression using distributed source coding (SGSCUDSC). For the CCTDSN and SGSCUDSC problems, the sources are converted into binary representations, as in the ASWDTBS problem, before encoding. Bayesian inference is applied to all three problems, and message passing is used to solve these inferences efficiently.

    For a discrete variable taking a small number of values, the belief propagation (BP) algorithm implements message passing efficiently. However, the complexity of the BP algorithm grows exponentially with the number of values a variable can take, so BP can only deal with discrete variables over small alphabets and limited classes of continuous variables. For more complex variables, deterministic approximation methods such as the variational Bayes (VB) method and the expectation propagation (EP) method can be efficiently incorporated into the message passing algorithm.

    For the ASWDTBS problem, a virtual binary asymmetric channel (BAC) models the correlation between the source data and the side information (SI); its two parameters, the 0->1 and 1->0 crossover probabilities, must be learned. Based on this model, a factor graph was established that includes the Low-Density Parity-Check (LDPC) code, the source data, the SI and both crossover probabilities. Since the crossover probabilities are continuous variables, the deterministic approximate inference methods are incorporated into the message passing algorithm. Applied to synthetic data, the VB-based algorithm achieved much better performance than both the EP-based algorithm and the standard BP algorithm; the poor performance of the EP-based algorithm is also analyzed.

    For the CCTDSN problem, temperature data were collected by Crossbow sensors. Four sensors were deployed at different locations in the laboratory and their readings were sent to a common destination. The data from one sensor served as the SI, and the data from the other three sensors were compressed. The decoding algorithm considers both spatial and temporal correlations, which take the form of a Kalman filter in the factor graph. To handle the mixture of discrete messages and continuous (Gaussian) messages in the Kalman-filter region of the factor graph, the EP algorithm was implemented so that all messages are approximated by Gaussian distributions. Testing results on the wireless network indicate that the proposed algorithm outperforms the prior algorithm.
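    The dissertation learns the crossover probabilities inside message passing with VB/EP approximations; the Python sketch below is a much simpler stand-in that only illustrates the underlying Bayesian estimate, assuming hard bit decisions are available and Beta priors on each crossover probability. The priors, names and synthetic data are all assumptions, not the dissertation's algorithm.

```python
import numpy as np

def update_crossovers(source_bits, si_bits, a=1.0, b=1.0):
    """Posterior means of p(0->1) and p(1->0) under Beta(a, b) priors."""
    s, y = np.asarray(source_bits), np.asarray(si_bits)
    n01 = np.sum((s == 0) & (y == 1))   # zeros flipped to one
    n0  = np.sum(s == 0)
    n10 = np.sum((s == 1) & (y == 0))   # ones flipped to zero
    n1  = np.sum(s == 1)
    p01 = (n01 + a) / (n0 + a + b)
    p10 = (n10 + a) / (n1 + a + b)
    return p01, p10

# Synthetic BAC with true crossover probabilities 0.05 and 0.20.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, 10000)
flip = np.where(s == 0, rng.random(s.size) < 0.05, rng.random(s.size) < 0.20)
y = s ^ flip
print(update_crossovers(s, y))  # close to (0.05, 0.20)
```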
    The SGSCUDSC problem consists of developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, DSC theory was carefully examined, and a customized reference-based genome compression protocol was developed to meet the low-complexity requirement at the client side. Based on the variation between the source and the SI, this protocol adaptively selects either syndrome coding or hash coding to compress code subsequences of variable length. Experimental results show that the proposed method performs promisingly compared with the state-of-the-art algorithm (GRS).
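    A hedged sketch of the adaptive mode decision described above: subsequences estimated to differ little from the reference (SI) get syndrome coding, while highly divergent ones get a short hash against which the decoder can verify candidate reconstructions. The threshold, the truncated-hash choice and all names are illustrative assumptions, not the protocol's actual rule; the syndrome payload is a placeholder.

```python
import hashlib

def choose_mode(est_mismatch_rate, threshold=0.05):
    """Pick a coding mode from the estimated source/SI divergence."""
    return "syndrome" if est_mismatch_rate <= threshold else "hash"

def encode_subsequence(subseq, est_mismatch_rate):
    mode = choose_mode(est_mismatch_rate)
    if mode == "hash":
        # Short hash lets the decoder check candidates built from the SI.
        payload = hashlib.md5(subseq.encode()).digest()[:4]
    else:
        payload = b"<syndrome bits>"  # placeholder for LDPC syndrome coding
    return mode, payload

print(encode_subsequence("ACGTACGT", 0.01))  # low divergence  -> syndrome
print(encode_subsequence("ACGTTTGT", 0.20))  # high divergence -> hash
```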