113 research outputs found

    Distributed Video Coding: Iterative Improvements


    Distributed source-channel coding using reduced-complexity syndrome-based TTCM

In the context of distributed joint source-channel coding, we conceive reduced-complexity turbo trellis coded modulation (TTCM)-aided syndrome-based block decoding for estimating the crossover probability pe of the binary symmetric channel (BSC), which models the correlation between a pair of sources. Our joint decoder achieves accurate correlation estimation for varying correlation coefficients at an SNR 3 dB lower than that of the conventional TTCM decoder, despite its considerable complexity reduction.
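
As a minimal illustration of the correlation model above (not of the paper's syndrome-based TTCM estimator), the Python sketch below generates a pair of binary sources coupled by a virtual BSC and recovers pe by its maximum-likelihood estimate, the empirical disagreement rate. All names and parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)

    n, pe_true = 100_000, 0.08
    x = rng.integers(0, 2, n)            # first binary source
    flips = rng.random(n) < pe_true      # virtual BSC crossover events
    y = x ^ flips                        # correlated second source

    # ML estimate of the BSC crossover probability: empirical disagreement rate
    pe_hat = np.mean(x != y)
    print(f"true pe = {pe_true}, estimated pe = {pe_hat:.4f}")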

    Adaptive Distributed Source Coding Based on Bayesian Inference

Distributed Source Coding (DSC) is an important topic in both information theory and communication. DSC exploits the correlation among sources to compress data, and it has the advantage of being simple and easy to implement. Within DSC, the Slepian-Wolf (S-W) and Wyner-Ziv (W-Z) problems are the two central ones, corresponding to lossless and lossy compression, respectively. Although the rate lower bounds of the S-W and W-Z problems have been known to researchers for many decades, designing codes that achieve these bounds is still an open problem. This dissertation focuses on three DSC problems: adaptive Slepian-Wolf decoding for two binary sources (ASWDTBS), compression of correlated temperature data in a sensor network (CCTDSN), and streamlined genome sequence compression using distributed source coding (SGSCUDSC). For the CCTDSN and SGSCUDSC problems, the sources are converted into binary representations, as in the ASWDTBS problem, before encoding.

Bayesian inference is applied to all three problems, and message passing is used to carry it out efficiently. For a discrete variable taking a small number of values, the belief propagation (BP) algorithm implements message passing efficiently; however, the complexity of BP grows exponentially with the number of values the variable can take, so BP can handle only discrete variables with small alphabets and a limited class of continuous variables. For more complex variables, deterministic approximation methods such as variational Bayes (VB) and expectation propagation (EP) are used, and these can be efficiently incorporated into the message passing algorithm.

For the ASWDTBS problem, a virtual binary asymmetric channel (BAC) models the correlation between the source data and the side information (SI); its two parameters, the 0->1 and 1->0 crossover probabilities, must be learned. Based on this model, a factor graph was established that includes the LDPC code, the source data, the SI, and both crossover probabilities. Since the crossover probabilities are continuous variables, the deterministic approximate inference methods are incorporated into the message passing algorithm. The proposed algorithm was applied to synthetic data, and the results showed that the VB-based algorithm performed much better than both the EP-based algorithm and the standard BP algorithm; the poor performance of the EP-based algorithm is also analyzed.

For the CCTDSN problem, temperature data were collected by Crossbow sensors placed at four different locations in the laboratory, with their readings sent to a common destination. The data from one sensor served as the SI, and the data from the other three sensors were compressed. The decoding algorithm considers both spatial and temporal correlations, represented as a Kalman filter within the factor graph. To handle the mixture of discrete messages and continuous (Gaussian) messages in the Kalman-filter region of the factor graph, the EP algorithm approximates all messages by Gaussian distributions. Tests on the wireless network indicate that the proposed algorithm outperforms the prior algorithm.
The SGSCUDSC problem consists of developing a streamlined genome sequence compression algorithm to support miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, DSC theory was carefully examined, and a customized reference-based genome compression protocol was developed to meet the low-complexity needs of the client side. Based on the variation between the source and the SI, this protocol adaptively selects either syndrome coding or hash coding to compress variable-length code subsequences. Experimental results show that the proposed method performs promisingly compared with the state-of-the-art algorithm (GRS).
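
The syndrome coding referred to above rests on LDPC decoding with side information. Below is a minimal sketch of syndrome-based belief propagation for the Slepian-Wolf setup, assuming for simplicity a known symmetric crossover probability p rather than the dissertation's learned binary asymmetric channel; the parity-check matrix H and all names are illustrative.

    import numpy as np

    def syndrome_bp(H, s, y, p, max_iter=50):
        """Decode source x from its LDPC syndrome s = (H x) mod 2 and side
        information y, where y differs from x with crossover probability p."""
        m, n = H.shape
        Lch = (1.0 - 2.0 * y) * np.log((1.0 - p) / p)   # LLR of x_j given y_j
        edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
        rows = [[j for j in range(n) if H[i, j]] for i in range(m)]
        msg_vc = {e: Lch[e[1]] for e in edges}           # variable-to-check msgs
        x_hat = (Lch < 0).astype(int)
        for _ in range(max_iter):
            # check-to-variable update (tanh rule); a syndrome bit of 1 flips the sign
            msg_cv = {}
            for i in range(m):
                for j in rows[i]:
                    prod = np.prod([np.tanh(msg_vc[(i, k)] / 2.0)
                                    for k in rows[i] if k != j])
                    prod = np.clip(prod, -0.999999, 0.999999)
                    msg_cv[(i, j)] = (1 - 2 * s[i]) * 2.0 * np.arctanh(prod)
            # posterior LLRs and variable-to-check update
            L = Lch.copy()
            for (i, j), v in msg_cv.items():
                L[j] += v
            for e in edges:
                msg_vc[e] = L[e[1]] - msg_cv[e]
            x_hat = (L < 0).astype(int)
            if np.array_equal((H @ x_hat) % 2, s):       # all checks satisfied
                break
        return x_hat

In the dissertation's adaptive variant, p itself is unknown, and its posterior is refined during decoding with VB or EP messages on the same factor graph.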

    ADAPTIVE AND SECURE DISTRIBUTED SOURCE CODING FOR VIDEO AND IMAGE COMPRESSION

Distributed Video Coding (DVC) is rapidly gaining popularity as a low-cost, robust video coding solution that reduces video encoding complexity. DVC is built on Distributed Source Coding (DSC) principles, whereby the correlation between the sources to be compressed is exploited at the decoder side. In DVC, a current frame available only at the encoder is estimated at the decoder from side information (SI) generated from other frames available there. The inter-frame correlation is then exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the SI frame. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. The ability to obtain a good correlation estimate is therefore becoming increasingly important in practical DVC implementations.

Existing correlation estimation methods in DVC fall into two main types: online estimation, where estimation takes place before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform online estimation techniques, at the cost of increased decoding complexity.

To exploit the robustness of DVC code designs, I integrate particle filtering with standard belief propagation (BP) decoding on one joint factor graph to estimate the correlation between source and side information. Correlation estimation is performed OTF, jointly with the decoding of the graph-based DSC code. Moreover, I demonstrate the proposed scheme within state-of-the-art DVC systems, which are transform-domain based with a feedback channel for rate adaptation. Experimental results show that the proposed system gives a significant performance improvement over the benchmark state-of-the-art DISCOVER codec (including its correlation estimation) and over the case without dynamic particle-filter tracking, owing to improved knowledge of timely correlation statistics via the combination of joint bit-plane decoding and particle-based BP tracking.

Although sampling-based (e.g., particle filtering) OTF correlation estimation advances DVC performance, it also introduces significant computational overhead and decoding delay. I therefore tackle this difficulty with a low-complexity adaptive DVC scheme using deterministic approximate inference, where correlation estimation is again performed OTF, jointly with the decoding of the factor-graph-based DVC code, but at much lower complexity. The proposed adaptive DVC scheme is based on expectation propagation (EP), which generally offers a better tradeoff between accuracy and complexity than other deterministic approximate inference methods. Experimental results show that the proposed scheme outperforms the benchmark DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity than the sampling method.

Finally, I extend the DVC concept (i.e., exploiting inter-frame correlation at the decoder side) to the lossless compression of biomedical imaging data (e.g., CT sequences), where each slice of a CT sequence is analogous to a frame of a video sequence.
Besides compression efficiency, another important concern with biomedical imaging data is privacy and security: ideally, biomedical data should be kept in a secure (i.e., encrypted) form. An intuitive approach is to compress the encrypted biomedical data directly. Unfortunately, traditional compression algorithms, which remove redundancy by exploiting the structure of the data, fail on encrypted data, because encrypted data appear random and lack the structure of the original. The "best" practice has been to compress the data before encryption; however, this is not appropriate for privacy-related scenarios (e.g., biomedical applications), where one wants to process data while keeping them encrypted and safe. In this dissertation, I develop a Secure Privacy-presERving Medical Image CompRessiOn (SUPERMICRO) framework based on DSC, which makes compression of the encrypted data possible without compromising security or compression efficiency. The approach guarantees data transmission and storage in a privacy-preserving manner. I tested the proposed framework on two CT image sequences and compared it with state-of-the-art JPEG 2000 lossless compression. Experimental results demonstrate that the SUPERMICRO framework provides enhanced security and privacy protection, as well as high compression performance.
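
The particle-based correlation tracking described above can be illustrated in isolation. The sketch below is a bootstrap particle filter that tracks a slowly drifting BSC crossover probability from per-frame bit disagreement counts; the dissertation's scheme runs jointly with BP decoding on one factor graph and is considerably more involved, so the model and all names here are simplifying assumptions.

    import numpy as np

    def track_crossover_pf(k_obs, n_bits, n_particles=1000, step=0.05, seed=0):
        """Bootstrap particle filter tracking a slowly varying BSC crossover
        probability p_t from per-frame disagreement counts k_obs[t] out of
        n_bits compared bits (binomial observation model)."""
        rng = np.random.default_rng(seed)
        logit = rng.normal(-2.0, 1.0, n_particles)            # particles in logit space
        estimates = []
        for k in k_obs:
            logit = logit + rng.normal(0.0, step, n_particles)  # random-walk transition
            p = 1.0 / (1.0 + np.exp(-logit))
            logw = k * np.log(p) + (n_bits - k) * np.log1p(-p)  # binomial likelihood
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimates.append(float(np.sum(w * p)))            # posterior-mean estimate
            idx = rng.choice(n_particles, n_particles, p=w)   # resample
            logit = logit[idx]
        return np.array(estimates)

    # e.g., counts drawn with p_t drifting from 0.05 to 0.15 are tracked closely:
    # k_obs = np.random.binomial(2000, np.linspace(0.05, 0.15, 60))
    # print(track_crossover_pf(k_obs, 2000))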

    Side information exploitation, quality control and low complexity implementation for distributed video coding

Distributed video coding (DVC) is a video coding methodology that shifts the highly complex motion search components from the encoder to the decoder. Such a video coder has a great advantage in encoding speed while still achieving rate-distortion (RD) performance similar to that of conventional coding solutions. Applications include wireless video sensor networks, mobile video cameras, and wireless video surveillance. Although much progress has been made in DVC over the past ten years, a gap in RD performance between conventional video coding solutions and DVC remains, and the latest DVC developments are still far from standardization and practical use. The key open problems lie in areas such as accurate and efficient side information generation and refinement, quality control between Wyner-Ziv (WZ) frames and key frames, correlation noise modelling, and decoder complexity. In this context, this thesis proposes solutions to improve on the state-of-the-art side information refinement schemes, to enable consistent quality control over the decoded frames during the coding process, and to implement a highly efficient DVC codec.

This thesis investigates the impact of reference frames on side information (SI) generation and reveals that reference frames have the potential to be better side information than the extensively used interpolated frames. Based on this investigation, we also propose a motion range prediction (MRP) method to exploit reference frames and precisely guide the statistical motion learning process. Extensive simulation results show that choosing reference frames as SI performs competitively with, and sometimes even better than, interpolated frames. Furthermore, the proposed MRP method significantly reduces the decoding complexity without degrading RD performance.

To minimize block artifacts and achieve consistent improvement in both the subjective and objective quality of the side information, we propose a novel side information synthesis framework working at pixel granularity: we synthesize the SI at the pixel level to minimize block artifacts and adaptively change the correlation noise model according to the new SI. Furthermore, we have fully implemented a state-of-the-art DVC decoder with the proposed framework, using serial and parallel processing technologies, to identify bottlenecks and areas in which to further reduce the decoding complexity, another major challenge for future practical DVC deployments. The performance is evaluated on the latest transform-domain DVC codec and compared with different standard codecs. Extensive experimental results show substantial and consistent rate-distortion gains over standard video codecs and significant speedup over the serial implementation.

To bring state-of-the-art DVC one step closer to practical use, we address the problem of distortion variation introduced by typical rate control algorithms, especially in a variable bit rate environment. Simulation results show that the proposed quality control algorithm is capable of meeting a user-defined target distortion with rather small variation for slow-motion sequences, and performs similarly to fixed quantization for fast-motion sequences, at the cost of some RD performance. Finally, we propose the first implementation of a distributed video encoder on a Texas Instruments TMS320DM6437 digital signal processor.
The WZ encoder is efficiently implemented using rate-adaptive low-density parity-check accumulate (LDPCA) codes, exploiting the hardware features and optimization techniques to improve the overall performance. Implementation results show that the WZ encoder encodes a QCIF frame in 134M instruction cycles on a TMS320DM6437 DSP running at 700 MHz, making it 29 times faster than the non-optimized encoder implementation. We also implemented a highly efficient DVC decoder using both serial and parallel technology on a PC-HPC (high-performance cluster) architecture, where the encoder runs on a general-purpose PC and the decoder runs on a multicore HPC. The experimental results show that the parallelized decoder achieves about a 10x speedup over the serial implementation under various bit rates and GOP sizes, along with significant RD gains with respect to the state-of-the-art DISCOVER codec.
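
The rate-adaptive LDPCA encoding mentioned above is simple enough to sketch: the encoder computes the LDPC syndrome of the source block and passes it through an accumulator, so the decoder can request incrementally larger subsets of the accumulated bits until decoding succeeds. A minimal sketch of that standard construction, assuming nothing about the thesis's specific code design; H and the names are illustrative.

    import numpy as np

    def ldpca_encode(H, x):
        """LDPCA encoding: LDPC syndrome followed by a 1/(1+D) accumulator.
        H: (m, n) binary parity-check matrix; x: length-n binary source block."""
        s = H.dot(x) % 2        # LDPC syndrome of the source block
        a = np.cumsum(s) % 2    # accumulated syndrome: a[i] = s[0] ^ ... ^ s[i]
        return a                # decoder requests growing subsets of a

    # Rate adaptation: a decoder holding a[3] that newly receives a[7] recovers
    # s[4]^s[5]^s[6]^s[7] = a[3] ^ a[7], a coarse parity check; denser subsets
    # of a yield finer checks, i.e., a higher effective syndrome rate.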

    Large-Scale Face Image Retrieval: A Wyner-Ziv Coding Approach


    REGION-BASED ADAPTIVE DISTRIBUTED VIDEO CODING CODEC

The recently developed Distributed Video Coding (DVC) is typically suited to applications where conventional video coding is not feasible because of its inherently high-complexity encoding. Examples include video surveillance using wireless/wired video sensor networks and applications using mobile cameras. With DVC, the complexity is shifted from the encoder to the decoder. The practical realization of DVC is referred to as Wyner-Ziv (WZ) video coding, in which an estimate of the original frame, called "side information", is generated using motion compensation at the decoder. Compression is achieved by sending only the extra information needed to correct this estimate: an error-correcting code is used under the assumption that the estimate is a noisy version of the original frame, and the rate needed is a certain number of parity bits. The side information is assumed to reach the decoder through a virtual channel.

Owing to the limitations of the compensation method, the predicted frame, i.e. the side information, is expected to have varying degrees of accuracy; these limitations stem from location-specific, non-stationary estimation noise. To cope with this, conventional video coders such as MPEG partition each frame and allocate an optimal coder to each partition, thereby achieving better rate-distortion performance. The same has not been done in DVC, however, as it would increase the encoder complexity. This work proposes partitioning the frame into multiple coding units (regions), each encoded differently. The partitioning is done at the decoder while generating the side information, and the region map is sent to the encoder at a very small rate penalty. The partitioning allows appropriate DVC coding parameters (virtual channel, rate, and quantizer) to be allocated to each region. The resulting region map is compressed with a quadtree algorithm and communicated to the encoder via the feedback channel.

Rate control in DVC is performed with channel coding techniques (turbo codes, LDPC, etc.), and the performance of the channel code depends heavily on the accuracy of the virtual channel model for the estimation error in each region. In this work, a turbo code is used, and an adaptive WZ DVC codec is designed in both the transform domain and the pixel domain. Transform-domain WZ video coding (TDWZ) clearly outperforms pixel-domain Wyner-Ziv coding (PDWZ), since it exploits the spatial redundancy during encoding. The performance evaluations show that the proposed system is superior to existing distributed video coding solutions: although it requires extra bits to transmit the region map, the rate gain is still noticeable, and it outperforms the state-of-the-art frame-based DVC by 0.6-1.9 dB.

The feedback channel (FC) serves to adapt the bit rate to the changing statistics between the side information and the frame to be encoded. In a unidirectional scenario, the encoder must perform the rate control; to estimate the rate correctly, it must compute typical side information, yet the rate cannot be calculated exactly at the encoder, only estimated. This work therefore also proposes a feedback-free, region-based adaptive DVC solution in the pixel domain, based on a machine learning approach to estimate the side information.
The performance evaluations show a rate penalty, but it is acceptable considering the simplicity of the proposed algorithm.
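
The quadtree compression of the region map lends itself to a compact illustration. Below is a minimal recursive quadtree encoder for a square, power-of-two label map: uniform blocks are emitted as leaves, others are split into four quadrants. It is a generic sketch of the technique, not the thesis's exact bitstream format; all names are illustrative.

    import numpy as np

    def quadtree_encode(region, out):
        """Encode a square (power-of-two) label map: a uniform block becomes
        ('leaf', label); otherwise emit ('split',) and recurse into quadrants."""
        if np.all(region == region[0, 0]):
            out.append(('leaf', int(region[0, 0])))
            return
        h = region.shape[0] // 2
        out.append(('split',))
        for rows in (slice(0, h), slice(h, None)):
            for cols in (slice(0, h), slice(h, None)):
                quadtree_encode(region[rows, cols], out)

    # Example: an 8x8 map whose top-left 4x4 block belongs to region class 1
    m = np.zeros((8, 8), dtype=int)
    m[:4, :4] = 1
    symbols = []
    quadtree_encode(m, symbols)
    print(symbols)   # [('split',), ('leaf', 1), ('leaf', 0), ('leaf', 0), ('leaf', 0)]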

    ADAPTIVE CHANNEL AND SOURCE CODING USING APPROXIMATE INFERENCE

Channel coding and source coding are two important problems in communications. Although both channel coding and source coding (in particular, distributed source coding (DSC)) achieve their ultimate performance only with perfect knowledge of the channel noise and the source correlation, respectively, such information may not always be available at the decoder side, owing to the time-varying nature of some communication systems and of the sources themselves. In this dissertation, I focus on online channel noise estimation and correlation estimation, using both stochastic and deterministic approximate inference on factor graphs.

In channel coding, belief propagation (BP) is a powerful algorithm for decoding low-density parity-check (LDPC) codes over additive white Gaussian noise (AWGN) channels. However, the traditional BP algorithm cannot adapt efficiently to statistical changes of the SNR in an AWGN channel. Two common workarounds in approximate inference are stochastic methods (e.g., particle filtering (PF)) and deterministic methods (e.g., expectation propagation (EP)). Deterministic methods are generally much faster than stochastic methods, whereas stochastic methods are more flexible and suit any distribution. In this dissertation, I propose two adaptive LDPC decoding schemes that perform online estimation of time-varying channel state information (in particular, the signal-to-noise ratio (SNR)) at the bit level by incorporating the PF and EP algorithms. Experimental comparison of the proposed PF-based and EP-based approaches shows that the EP-based approach attains comparable estimation accuracy with less computational complexity than the PF-based method, for both stationary and time-varying SNR, while simultaneously enhancing the BP decoding performance. Moreover, the EP estimator converges very quickly, and the additional computational overhead of the proposed decoder is less than 10% of that of the standard BP decoder.

Given the close relationship between source coding and channel coding, the proposed ideas are then extended to source correlation estimation. First, I study the correlation estimation problem in the lossless DSC setup, considering both asymmetric and non-asymmetric SW coding of two correlated binary sources. The aforementioned PF-based and EP-based approaches are extended to handle the correlation between the two binary sources, modelled as a virtual binary symmetric channel (BSC) with a time-varying crossover probability. In addition, to handle the correlation estimation problem of Wyner-Ziv (WZ) coding, the lossy DSC setup, I design a joint bit-plane model through which the PF-based approach can track the correlation between non-binary sources. Experimental results show that the proposed correlation estimation approaches significantly improve the compression performance of DSC.

Finally, owing to its ultra-low encoding complexity, DSC is a promising technique for tasks in which the encoder has only limited computing and communication power, such as space imaging systems. I therefore consider a real-world application of the proposed correlation estimation scheme to the onboard low-complexity compression of solar stereo images, since such solutions are essential to reduce onboard storage, processing, and communication resources.
In this dissertation, I propose an adaptive distributed compression solution using PF that tracks the correlation, and also performs disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA's STEREO project. The experimental results show a significant PSNR improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.
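
The online SNR estimation interleaved with BP decoding can be conveyed with a single update step. The sketch below re-estimates the AWGN noise variance from BPSK observations and the per-bit posteriors a BP decoder produces, an EM-style moment-matching step in the spirit of (though much simpler than) the dissertation's PF and EP estimators; all names are illustrative.

    import numpy as np

    def em_update_noise_var(r, p1):
        """One EM-style re-estimation of the AWGN noise variance.
        r:  received BPSK samples, r = s + noise, with s = +1 for bit 0, -1 for bit 1
        p1: per-bit posteriors P(bit = 1 | r, code) taken from the BP decoder."""
        # E[(r - s)^2] under the bit posteriors, averaged over the block
        return np.mean((1.0 - p1) * (r - 1.0) ** 2 + p1 * (r + 1.0) ** 2)

    # Usage inside an adaptive decoder loop (sketch): alternate BP iterations with
    # this update, then refresh the channel LLRs 2*r/sigma2 before the next BP
    # pass, so the decoder tracks a drifting SNR.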

    Source Coding with Side Information at the Decoder and Uncertain Knowledge of the Correlation

This paper considers the problem of lossless source coding with side information at the decoder, when the correlation model between the source and the side information is uncertain. Four parametrized models representing the correlation between the source and the side information are introduced; the uncertainty on the correlation appears through the lack of knowledge of the parameter values. For each model, we propose a practical coding scheme based on non-binary Low-Density Parity-Check codes that is able to deal with the parameter uncertainty. At the encoder, the choice of the coding rate results from an information-theoretic analysis. We then propose decoding algorithms that jointly estimate the source vector and the parameters. As the proposed decoder is based on the Expectation-Maximization algorithm, which is very sensitive to initialization, we also propose a method to first produce a coarse estimate of the parameters.
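
The information-theoretic rate choice referred to above can be made concrete for the simplest correlation model. If the side information is related to the source through a BSC with crossover probability p, the Slepian-Wolf bound gives the minimum lossless rate, and parameter uncertainty forces a conservative choice. A worked instance in LaTeX, assuming the BSC model (one of several parametrizations the paper considers):

    % Slepian-Wolf bound for BSC correlation with crossover probability p:
    R_X \ge H(X \mid Y) = h(p) = -p \log_2 p - (1-p) \log_2 (1-p).

    % Under parameter uncertainty p \in [p_{\min}, p_{\max}] with p_{\max} \le 1/2,
    % h is increasing on [0, 1/2], so the rate must cover the worst case:
    R_X \ge \max_{p \in [p_{\min}, p_{\max}]} h(p) = h(p_{\max}).

    % Example: p \in [0.05, 0.11] \Rightarrow R_X \ge h(0.11) \approx 0.50 bits/symbol.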