
    How to Achieve the Capacity of Asymmetric Channels

    We survey coding techniques that enable reliable transmission at rates that approach the capacity of an arbitrary discrete memoryless channel. In particular, we take the point of view of modern coding theory and discuss how recent advances in coding for symmetric channels help provide more efficient solutions for the asymmetric case. We consider, in more detail, three basic coding paradigms. The first one is Gallager's scheme that consists of concatenating a linear code with a non-linear mapping so that the input distribution can be appropriately shaped. We explicitly show that both polar codes and spatially coupled codes can be employed in this scenario. Furthermore, we derive a scaling law between the gap to capacity, the cardinality of the input and output alphabets, and the required size of the mapper. The second one is an integrated scheme in which the code is used both for source coding, in order to create codewords distributed according to the capacity-achieving input distribution, and for channel coding, in order to provide error protection. Such a technique has been recently introduced by Honda and Yamamoto in the context of polar codes, and we show how to apply it also to the design of sparse graph codes. The third paradigm is based on an idea of Böcherer and Mathar, and separates the two tasks of source coding and channel coding by a chaining construction that binds together several codewords. We present conditions for the source code and the channel code, and we describe how to combine any source code with any channel code that fulfill those conditions, in order to provide capacity-achieving schemes for asymmetric channels. In particular, we show that polar codes, spatially coupled codes, and homophonic codes are suitable as basic building blocks of the proposed coding strategy.
    Comment: 32 pages, 4 figures, presented in part at Allerton'14 and published in IEEE Trans. Inform. Theory.
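    As a rough illustration of the first paradigm (a sketch only, not the construction analyzed in the paper), the snippet below builds a dyadic many-to-one mapper that shapes uniform bits into a biased channel input; the function name `dyadic_mapper` and the parameter choices are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: map k uniform bits to channel input symbols so that the
# induced distribution approximates a target distribution with dyadic fractions m_i / 2^k.
def dyadic_mapper(target_probs, k):
    counts = np.round(np.array(target_probs) * 2**k).astype(int)
    counts[np.argmax(counts)] += 2**k - counts.sum()   # make the counts sum to exactly 2**k
    return np.repeat(np.arange(len(target_probs)), counts)

# Shape a binary input to P(X = 1) ~ 0.3 using k = 4 uniform (coded) bits.
k = 4
table = dyadic_mapper([0.7, 0.3], k)
bits = np.random.randint(0, 2, size=(100000, k))       # uniform bits, e.g. from a linear code
idx = bits.dot(1 << np.arange(k)[::-1])                # interpret each group of k bits as an integer
x = table[idx]                                         # shaped channel input symbols
print("empirical P(X = 1):", x.mean())                 # close to 5/16, the dyadic approximation of 0.3
```

    Increasing k shrinks the gap between the dyadic approximation and the target distribution, which is the intuition behind a scaling law relating mapper size and gap to capacity.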

    Density Evolution for the Design of Non-Binary Low Density Parity Check Codes for Slepian-Wolf Coding

    In this paper, we investigate the problem of designing good non-binary LDPC codes for Slepian-Wolf coding. The design method is based on Density Evolution, which gives the asymptotic error probability of the decoder for given code degree distributions. Density Evolution was originally introduced for channel coding under the assumption that the channel is symmetric. In Slepian-Wolf coding, the correlation channel is not necessarily symmetric and the source distribution has to be taken into account. In this paper, we express the non-binary Density Evolution recursion for Slepian-Wolf coding. From Density Evolution, we then perform code degree distribution optimization using an optimization algorithm called differential evolution. Both asymptotic performance evaluation and finite-length simulations show the gain obtained by considering optimized degree distributions for Slepian-Wolf coding.
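    For intuition, the sketch below runs the classical binary density evolution recursion for an LDPC ensemble over a binary erasure channel; it is a simplified stand-in for the non-binary, asymmetric-correlation recursion derived in the paper, and the function names are illustrative.

```python
# Minimal sketch: density evolution for a binary LDPC ensemble over a BEC with
# erasure probability eps, tracking x_{l+1} = eps * lambda(1 - rho(1 - x_l)).
def density_evolution_bec(eps, lam, rho, iters=200):
    """lam, rho: edge-perspective degree distributions as coefficient lists,
    where index j holds the fraction of edges attached to nodes of degree j + 1."""
    def poly(coeffs, x):
        return sum(c * x**j for j, c in enumerate(coeffs))
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
    return x  # residual erasure probability; ~0 means decoding succeeds

# (3,6)-regular ensemble: lambda(z) = z^2, rho(z) = z^5, BP threshold ~0.4294.
lam = [0.0, 0.0, 1.0]
rho = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
for eps in (0.40, 0.43):
    print(eps, density_evolution_bec(eps, lam, rho))
```

    Degree distribution optimization (here via differential evolution) searches over `lam` and `rho` for the pair that pushes the decoding threshold as close as possible to the Slepian-Wolf limit.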

    Algebraic approaches to distributed compression and network error correction

    Algebraic codes have been studied for decades and have extensive applications in communication and storage systems. In this dissertation, we propose several novel algebraic approaches for distributed compression and network error protection problems.

    In the first part of this dissertation we propose the use of Reed-Solomon codes for the compression of two nonbinary sources. Reed-Solomon codes are easy to design and offer natural rate adaptivity. We compare their performance with multistage LDPC codes and show that algebraic soft-decision decoding of Reed-Solomon codes can be used effectively under certain correlation structures. As part of this work we have proposed a method that adapts list decoding to the problem of syndrome decoding. This in turn allows us to arrive at improved methods for the compression of multicast network coding vectors. When more than two correlated sources are present, we consider a correlation model given by a system of linear equations. We propose a transformation of the correlation model and a way to determine proper decoding schedules. Our scheme allows us to exploit more correlations than previous work, and the simulation results confirm its better performance.

    In the second part of this dissertation we study the network protection problem in the presence of adversarial errors and failures. In particular, we consider the use of network coding for the simultaneous protection of multiple unicast connections, under certain restrictions on the network topology. The proposed scheme allows the sharing of protection resources among multiple unicast connections. Simulations show that our proposed scheme saves network resources by 4%-15% compared to a protection scheme based on simple repetition codes, especially when the number of primary paths is large or the costs for establishing primary paths are high.
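    The syndrome-based compression idea underlying this line of work can be illustrated with a toy binary example (the dissertation itself uses Reed-Solomon codes with algebraic soft-decision and list decoding); the (7,4) Hamming code and the one-bit-difference correlation model below are illustrative assumptions, not the actual construction.

```python
import numpy as np

# Toy syndrome-based Slepian-Wolf coder with the (7,4) Hamming code: the encoder
# sends only the 3-bit syndrome of x, and the decoder recovers x from that syndrome
# and the correlated side information y, assuming x and y differ in at most one bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    return H.dot(x) % 2                      # 3 bits transmitted instead of 7

def decode(s, y):
    # H e = H (x + y) = s + H y; a syndrome lookup identifies the single-bit error e
    se = (s + H.dot(y)) % 2
    e = np.zeros(7, dtype=int)
    for i in range(7):
        if np.array_equal(H[:, i], se):
            e[i] = 1
            break
    return (y + e) % 2

x = np.random.randint(0, 2, 7)
e = np.zeros(7, dtype=int); e[np.random.randint(7)] = 1
y = (x + e) % 2                              # side information: x with one bit flipped
assert np.array_equal(decode(encode(x), y), x)
```

    Replacing the Hamming code with a Reed-Solomon code over a larger field, and the table lookup with soft-decision or list decoding, is the nonbinary, rate-adaptive version pursued in the dissertation.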

    Adaptive Distributed Source Coding Based on Bayesian Inference

    Distributed Source Coding (DSC) is an important topic in both information theory and communications. DSC exploits the correlations among the sources to compress data, and it has the advantages of being simple and easy to carry out. In DSC, Slepian-Wolf (S-W) and Wyner-Ziv (W-Z) are two important problems, which can be classified as lossless compression and lossy compression, respectively. Although the lower bounds of the S-W and W-Z problems have been known to researchers for many decades, code designs that achieve these lower bounds remain an open problem. This dissertation focuses on three DSC problems: the adaptive Slepian-Wolf decoding for two binary sources (ASWDTBS) problem, the compression of correlated temperature data of a sensor network (CCTDSN) problem, and the streamlined genome sequence compression using distributed source coding (SGSCUDSC) problem. For the CCTDSN and SGSCUDSC problems, the sources are converted into a binary representation, as in the ASWDTBS problem, before encoding.

    Bayesian inference is applied to all three problems, and a message-passing algorithm is used to carry out the inference efficiently. For a discrete variable that takes a small number of values, the belief propagation (BP) algorithm implements message passing efficiently. However, the complexity of the BP algorithm increases exponentially with the number of values of the variable, so BP can only handle discrete variables with small alphabets and a limited class of continuous variables. For more complex variables, deterministic approximation methods such as variational Bayes (VB) and expectation propagation (EP) are used; these methods can be efficiently incorporated into the message-passing algorithm.

    A virtual binary asymmetric channel (BAC) is introduced to model the correlation between the source data and the side information (SI) in the ASWDTBS problem, with two parameters to be learned: the crossover probabilities for 0->1 and 1->0. Based on this model, a factor graph is established that includes the LDPC code, the source data, the SI, and both crossover probabilities. Since the crossover probabilities are continuous variables, the deterministic approximate inference methods are incorporated into the message-passing algorithm. The proposed algorithm was applied to synthetic data, and the results show that the VB-based algorithm achieves much better performance than both the EP-based algorithm and the standard BP algorithm. The poor performance of the EP-based algorithm is also analyzed.

    For the CCTDSN problem, the temperature data were collected by Crossbow sensors. Four sensors were deployed at different locations in the laboratory, and their readings were sent to a common destination. The data from one sensor were used as the SI, and the data from the other three sensors were compressed. The decoding algorithm considers both spatial and temporal correlations, which take the form of a Kalman filter in the factor graph. To deal with the mixture of discrete messages and continuous (Gaussian) messages in the Kalman-filter region of the factor graph, the EP algorithm was implemented so that all of the messages are approximated by Gaussian distributions. Testing results on the wireless network indicate that the proposed algorithm outperforms the prior algorithm.
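    To make the virtual BAC correlation model above concrete, the sketch below computes the per-bit log-likelihood ratios that would feed an LDPC decoder for fixed crossover probabilities; the function and variable names are illustrative, and the learning of the crossovers via VB or EP inside message passing is not reproduced here.

```python
import numpy as np

def bac_llrs(y, p01, p10, p1=0.5):
    """Per-bit LLRs log P(x=0|y) / P(x=1|y) under a binary asymmetric channel
    x -> y with p01 = P(y=1|x=0), p10 = P(y=0|x=1), and prior P(x=1) = p1."""
    y = np.asarray(y)
    like0 = np.where(y == 1, p01, 1.0 - p01)    # P(y | x = 0)
    like1 = np.where(y == 0, p10, 1.0 - p10)    # P(y | x = 1)
    return np.log((1.0 - p1) * like0) - np.log(p1 * like1)

# Example: side information y observed through a BAC with asymmetric crossovers.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20)
flip = np.where(x == 0, rng.random(20) < 0.05, rng.random(20) < 0.20)
y = x ^ flip.astype(int)
print(bac_llrs(y, p01=0.05, p10=0.20))
```

    Because the channel is asymmetric, zeros and ones receive differently scaled LLRs, which is exactly why the symmetric-channel assumptions of standard decoder analysis no longer apply.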
    The SGSCUDSC problem consists of developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, DSC theory was carefully examined, and a customized reference-based genome compression protocol was developed to meet the low-complexity requirement at the client side. Based on the variation between the source and the SI, the protocol adaptively selects either syndrome coding or hash coding to compress variable-length code subsequences. The experimental results of the proposed method show promising performance when compared with the state-of-the-art algorithm (GRS).
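    The following is a hypothetical sketch of the adaptive choice described above, under the assumption that the selection is driven by a simple mismatch count between a subsequence and its reference; the threshold and the rule itself are illustrative, not the protocol specified in the dissertation.

```python
import hashlib

# Hypothetical selection rule: choose syndrome coding when the source subsequence
# closely matches the reference (SI), and fall back to hash coding otherwise.
def choose_mode(subseq, reference, max_mismatch=2):
    mismatches = sum(a != b for a, b in zip(subseq, reference))
    return "syndrome" if mismatches <= max_mismatch else "hash"

def hash_code(subseq, nbytes=4):
    # the decoder derives candidate subsequences from the SI and checks them against this digest
    return hashlib.sha1(subseq.encode()).digest()[:nbytes]

print(choose_mode("ACGTACGT", "ACGTACGA"))   # few mismatches  -> syndrome coding
print(choose_mode("ACGTACGT", "TTTTTTTT"))   # many mismatches -> hash coding
print(hash_code("ACGTACGT").hex())
```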

    Joint Reconstruction of Multi-view Compressed Images

    The distributed representation of correlated multi-view images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed correlated images are jointly decoded in order to improve the reconstruction quality of all the compressed images. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among the different cameras. A central decoder first estimates the underlying correlation model from the independently compressed images, which is then used for the joint signal recovery. The joint reconstruction is cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images that comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be consistent with their compressed versions. We show by experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our proposed algorithm compares advantageously to state-of-the-art distributed coding schemes based on disparity learning and on DISCOVER.
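    As a minimal sketch of TV-regularized, consistency-constrained reconstruction in the spirit described above (not the paper's actual solver, and single-view only), the code below smooths a coarsely quantized image by descending on a smoothed total-variation objective while projecting onto the quantization cells; the function name, step sizes, and test image are illustrative assumptions.

```python
import numpy as np

# Sketch: gradient descent on a smoothed TV objective, with a projection after
# every step onto the set of images consistent with the coarse quantization that
# stands in for the compressed observation. A joint decoder would add cross-view
# correlation constraints in the same fashion.
def tv_reconstruct(quantized, step, iters=200, lr=0.1, eps=1e-3):
    lo, hi = quantized - step / 2, quantized + step / 2   # quantization cells
    x = quantized.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag                       # normalized image gradients
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x += lr * div                                     # descend on the TV objective
        x = np.clip(x, lo, hi)                            # stay consistent with the compressed data
    return x

rng = np.random.default_rng(1)
img = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.ones((16, 16)))  # piecewise-constant scene
noisy = img + 0.1 * rng.standard_normal(img.shape)
step = 0.25
q = np.round(noisy / step) * step                         # coarsely quantized "compressed" image
rec = tv_reconstruct(q, step)
print("MSE before:", np.mean((q - img) ** 2), " after:", np.mean((rec - img) ** 2))
```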

    Distributed Joint Source-Channel Coding With Copula-Function-Based Correlation Modeling for Wireless Sensors Measuring Temperature

    Wireless sensor networks (WSNs) deployed for temperature monitoring in indoor environments call for systems that perform efficient compression and reliable transmission of the measurements. This is known to be a challenging problem in such deployments, as highly efficient compression mechanisms impose a high computational cost at the encoder. In this paper, we propose a new distributed joint source-channel coding (DJSCC) solution for this problem. Our design allows for efficient compression and error-resilient transmission, with low computational complexity at the sensor. A new Slepian-Wolf code construction, based on non-systematic Raptor codes, is devised that achieves good performance at short code lengths, which are appropriate for temperature monitoring applications. A key contribution of this paper is a novel Copula-function-based modeling approach that accurately expresses the correlation amongst the temperature readings from colocated sensors. Experimental results using a WSN deployment reveal that, for lossless compression, the proposed Copula-function-based model leads to a notable encoding rate reduction (of up to 17.56%) compared with the state-of-the-art model in the literature. Using the proposed model, our DJSCC system achieves significant rate savings (up to 41.81%) against a baseline system that performs arithmetic entropy encoding of the measurements. Moreover, under channel losses, the transmission rate reduction against the state-of-the-art model reaches 19.64%, which leads to energy savings between 18.68% and 24.36% with respect to the baseline system.
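    A minimal sketch of copula-based correlation modeling, assuming a Gaussian copula for illustration (the paper's specific copula family and its use inside the decoder are not reproduced here): the dependence between two sensors' readings is estimated separately from their marginals via rank transforms, and the function names and synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Illustrative sketch: fit a Gaussian copula to paired sensor readings by mapping
# each marginal to uniforms with its empirical CDF, then to normal scores, and
# estimating the correlation of the scores. The marginals themselves are left free.
def fit_gaussian_copula(u_samples, v_samples):
    u = rankdata(u_samples) / (len(u_samples) + 1)
    v = rankdata(v_samples) / (len(v_samples) + 1)
    z_u, z_v = norm.ppf(u), norm.ppf(v)
    return np.corrcoef(z_u, z_v)[0, 1]

# Synthetic correlated "temperature" readings from two colocated sensors.
rng = np.random.default_rng(0)
t1 = 20 + 2 * rng.standard_normal(500)
t2 = t1 + 0.5 * rng.standard_normal(500)      # the second sensor tracks the first
print("estimated copula correlation:", round(fit_gaussian_copula(t1, t2), 3))
```

    Separating dependence from marginals in this way is what lets a Slepian-Wolf decoder derive soft information about one sensor's reading from another's, which is how the correlation model enters the DJSCC scheme.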