51 research outputs found

    Coding for Relay Networks with Parallel Gaussian Channels

    A wireless relay network consists of multiple source nodes, multiple destination nodes, and possibly many relay nodes in between to facilitate transmission. The performance of such networks clearly depends on the information forwarding strategies adopted at the relay nodes. This dissertation studies a particular forwarding strategy called compute-and-forward. Compute-and-forward is a novel paradigm that incorporates the idea of network coding within the physical layer and hence is often referred to as physical-layer network coding. The main idea is to exploit the superposition nature of the wireless medium to directly compute or decode functions of the transmitted signals at intermediate relays in a network. Thus, the coding performed at the physical layer serves the purpose of error correction and also permits recovery of functions of the transmitted signals. For the bidirectional relaying problem with Gaussian channels, Wilson et al. and Nam et al. have shown that the compute-and-forward paradigm is asymptotically optimal and achieves the capacity region to within 1 bit; however, similar results beyond the memoryless case are still lacking, mainly because channels with memory destroy the lattice structure that is crucial to compute-and-forward. How to extend compute-and-forward to such channels has therefore been a challenging issue, and it motivates this study of the extension of compute-and-forward to channels with memory, such as those with inter-symbol interference. The bidirectional relaying problem with parallel Gaussian channels is also studied; this is a relevant model both for the Gaussian bidirectional channel with inter-symbol interference and for that with multiple-input multiple-output channels. Motivated by the recent success of the linear finite-field deterministic model, we first investigate the corresponding deterministic parallel bidirectional relay channel and fully characterize its capacity region. Two compute-and-forward schemes are then proposed for the Gaussian model, and the capacity region is approximately characterized to within a constant gap. The design of coding schemes for the compute-and-forward paradigm with low decoding complexity is then considered. Based on the separation-based framework proposed previously by Tunali et al., this study proposes a family of constellations that are suitable for the compute-and-forward paradigm. Moreover, using the Chinese remainder theorem, it is shown that the proposed constellations are isomorphic to product fields and can therefore be put into a multilevel coding framework. This study then proposes multilevel coding for the proposed constellations and uses multistage decoding to further reduce decoding complexity.
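
    To make the compute-and-forward idea concrete, the sketch below decodes a function of two transmissions directly from their noisy superposition: two nodes send uncoded q-ary PAM symbols and the relay recovers their modulo-q sum without decoding either message individually. The field size, SNR, and uncoded mapping are illustrative assumptions; the dissertation's schemes use structured lattice codes rather than this toy constellation.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 5          # illustrative field size
n = 10_000     # block length
snr_db = 20.0

# Two sources map finite-field symbols onto a centered integer constellation.
w1 = rng.integers(0, q, n)
w2 = rng.integers(0, q, n)
x1 = w1 - (q - 1) / 2.0
x2 = w2 - (q - 1) / 2.0

# The relay observes only the noisy superposition of both transmissions.
noise_std = np.sqrt((np.var(x1) + np.var(x2)) / 10 ** (snr_db / 10))
y = x1 + x2 + rng.normal(0.0, noise_std, n)

# Instead of separating w1 and w2, the relay decodes the modulo-q sum
# directly: undo the centering offset, round to the integer lattice, mod q.
sum_hat = np.rint(y + (q - 1)).astype(int) % q
ser = np.mean(sum_hat != (w1 + w2) % q)
print(f"symbol error rate of the decoded function: {ser:.4f}")
```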

    Source-Channel Diversity for Parallel Channels

    We consider transmitting a source across a pair of independent, non-ergodic channels with random states (e.g., slow fading channels) so as to minimize the average distortion. The general problem is unsolved. Hence, we focus on comparing two commonly used source and channel encoding systems, which correspond to exploiting diversity either at the physical layer through parallel channel coding or at the application layer through multiple description source coding. For on-off channel models, source coding diversity offers better performance. For channels with a continuous range of reception quality, we show the reverse is true. Specifically, we introduce a new figure of merit called the distortion exponent, which measures how fast the average distortion decays with SNR. For continuous-state models such as additive white Gaussian noise channels with multiplicative Rayleigh fading, optimal channel coding diversity at the physical layer is more efficient than source coding diversity at the application layer, in that the former achieves a better distortion exponent. Finally, we consider a third architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, and maintains the advantages of multiple description systems for on-off channels. Thus, the multiple description system with joint decoding achieves the best performance, among the three architectures considered, on both continuous-state and on-off channels.
    Comment: 48 pages, 14 figures
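
    As a toy illustration of the distortion exponent, the Monte Carlo sketch below estimates how fast average distortion decays with SNR on a single Rayleigh-fading link. The assumption of ideal source-channel coding at the instantaneous capacity is mine, made for the sketch; it is not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000

# Unit-variance Gaussian source, one channel use per source sample, and
# (assumed) ideal coding at the instantaneous capacity C = log2(1 + g*snr),
# giving per-realization distortion D = 2**(-2C) = (1 + g*snr)**(-2).
g = rng.exponential(1.0, trials)        # Rayleigh power gains
snrs_db = np.array([10.0, 20.0, 30.0, 40.0])

avg_d = np.array([np.mean((1.0 + g * 10 ** (s / 10)) ** -2) for s in snrs_db])

# The distortion exponent is the slope of -log(D) versus log(SNR).
slopes = -np.diff(np.log(avg_d)) / np.diff(snrs_db * np.log(10) / 10)
print("estimated distortion exponent between decades:", np.round(slopes, 2))
```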

    Lecture Notes on Network Information Theory

    These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/

    A Deterministic Annealing Framework for Global Optimization of Delay-Constrained Communication and Control Strategies

    This dissertation is concerned with the global optimization of delay-constrained communication and control strategies. Specifically, the objective is to obtain optimal encoder and decoder functions that map between the source space and the channel space so as to minimize a given cost functional. The cost surfaces associated with these problems are highly complex and riddled with local minima, rendering gradient-descent-based methods ineffective. This thesis proposes and develops a powerful non-convex optimization method based on the concept of deterministic annealing (DA), which is derived from information-theoretic principles with analogies to statistical physics and has been successfully employed in several problems including vector quantization, classification, and regression. DA has several useful properties, including reduced sensitivity to initialization and a strong ability to avoid poor local minima. DA-based optimization methods are developed here for the following fundamental communication problems: the Wyner-Ziv setting, where only the decoder has access to side information; the distributed setting, where independent encoders transmit over independent channels to a central decoder; and the analog multiple descriptions setting, an extension of the well-known multiple descriptions source coding problem. Comparative numerical results are presented that show strict superiority of the proposed method over gradient-descent-based optimization methods as well as prior approaches in the literature, together with a detailed analysis of the highly non-trivial structure of the obtained mappings. The thesis further studies the related problem of globally optimizing controller mappings in decentralized stochastic control problems, including Witsenhausen's celebrated 1968 counterexample. It is well known that most decentralized control problems do not admit closed-form solutions and require numerical optimization. An optimization method based on DA is developed for a class of decentralized stochastic control problems. Comparative numerical results are presented for two test problems that show strict superiority of the proposed method over prior approaches in the literature, and the structure of the obtained controller functions is analyzed.
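
    To make the DA mechanics above concrete, here is a minimal sketch for scalar quantizer design (the vector quantization use case the abstract cites): soft Gibbs assignments at high temperature smooth the cost surface, and gradual cooling recovers hard assignments while sidestepping many poor local minima. The cooling schedule and initialization are illustrative choices, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 2000)             # scalar Gaussian source samples
K = 4                                      # codebook size
y = rng.normal(0.0, 1e-3, K)               # nearly coincident initial codewords

T = 2.0                                    # initial temperature
while T > 1e-3:
    for _ in range(30):                    # fixed-point iterations at this T
        # Soft (Gibbs) assignments: at high T each sample is shared by all
        # codewords, which smooths the effective cost surface.
        d = (x[:, None] - y[None, :]) ** 2
        p = np.exp(-(d - d.min(axis=1, keepdims=True)) / T)
        p /= p.sum(axis=1, keepdims=True)
        # Centroid update weighted by the soft assignments.
        y = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
    T *= 0.8                               # cool toward hard assignments

mse = np.mean(np.min((x[:, None] - y[None, :]) ** 2, axis=1))
print("codebook:", np.round(np.sort(y), 3), "| MSE:", round(mse, 4))
```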

    A FEASIBILITY STUDY ON C-RAN

    Nowadays the number of mobile phone users is increasing exponentially, which causes congestion in the network and demands large bandwidth. Among the promising technology candidates to overcome this problem is the cloud radio access network (C-RAN). In C-RAN, a single baseband unit (BBU) communicates with users through distributed remote radio heads (RRHs). The RRHs are connected to the BBU via high-capacity, low-latency fronthaul links and perform soft relaying. The C-RAN architecture imposes a shortage of fronthaul bandwidth because raw I/Q samples are exchanged between the RRHs and the BBU. In the BBU, different algorithms are used to improve capacity, such as joint decompression and decoding (JDD) and Wyner-Ziv coding.
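
    A back-of-the-envelope calculation shows why raw I/Q exchange strains the fronthaul. The LTE-like sample rate, quantizer resolution, and line-coding overhead below are illustrative assumptions, not figures from the study.

```python
# Fronthaul load for raw I/Q transport (illustrative LTE-like numbers).
sample_rate = 30.72e6     # samples/s for a 20 MHz carrier
bits_per_component = 15   # quantizer resolution per I and per Q component
antennas = 2

payload_bps = sample_rate * 2 * bits_per_component * antennas
line_rate_bps = payload_bps * 10 / 8      # e.g., 8b/10b line-coding overhead

print(f"raw I/Q payload : {payload_bps / 1e9:.2f} Gbit/s")
print(f"with line coding: {line_rate_bps / 1e9:.2f} Gbit/s")
# The fronthaul carries an order of magnitude more bits than the user data
# delivered over the air, which is why compression schemes such as JDD and
# Wyner-Ziv coding at the BBU matter.
```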

    ADAPTIVE AND SECURE DISTRIBUTED SOURCE CODING FOR VIDEO AND IMAGE COMPRESSION

    Distributed Video Coding (DVC) is rapidly gaining popularity as a low-cost, robust video coding solution that reduces video encoding complexity. DVC is built on Distributed Source Coding (DSC) principles, where the correlation between the sources to be compressed is exploited at the decoder side. In the case of DVC, a current frame available only at the encoder is estimated at the decoder with side information (SI) generated from other frames available at the decoder. The inter-frame correlation in DVC is then exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the SI frame. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: online estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform online estimation techniques at the cost of increased decoding complexity. In order to exploit the robustness of DVC code designs, I integrate particle filtering with standard belief propagation (BP) decoding for inference on one joint factor graph to estimate the correlation between source and side information. Correlation estimation is performed OTF, as it is carried out jointly with decoding of the graph-based DSC code. Moreover, I demonstrate the proposed scheme within state-of-the-art DVC systems, which are transform-domain based with a feedback channel for rate adaptation. Experimental results show that the proposed system gives a significant performance improvement over the benchmark state-of-the-art DISCOVER codec (including its correlation estimation) and over the case without dynamic particle-filter tracking, owing to improved knowledge of timely correlation statistics via the combination of joint bit-plane decoding and particle-based BP tracking. Although sampling-based (e.g., particle filtering) OTF correlation estimation advances the performance of DVC, it also introduces significant computational overhead and increases the decoding delay. Therefore, I tackle this difficulty with a low-complexity adaptive DVC scheme using deterministic approximate inference, where correlation estimation is again performed OTF jointly with decoding of the factor-graph-based DVC code but with much lower complexity. The proposed adaptive DVC scheme is based on expectation propagation (EP), which generally offers a better tradeoff between accuracy and complexity than other deterministic approximate inference methods. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves decoding performance comparable to the sampling method at significantly lower complexity. Finally, I extend the concept of DVC (i.e., exploiting inter-frame correlation at the decoder side) to the compression of biomedical imaging data (e.g., CT sequences) in a lossless setup, where each slice of a CT sequence is analogous to a frame of a video sequence.
Besides compression efficiency, another important concern with biomedical imaging data is privacy and security. Ideally, biomedical data should be kept in a secure manner (i.e., encrypted). An intuitive approach is to compress the encrypted biomedical data directly. Unfortunately, traditional compression algorithms (which remove redundancy by exploiting the structure of the data) fail on encrypted data, because encrypted data appear random and lack the structure of the original data. The "best" practice has been to compress the data before encryption; however, this is not appropriate for privacy-related scenarios (e.g., biomedical applications), where one wants to process data while keeping them encrypted and safe. In this dissertation, I develop a Secure Privacy-presERving Medical Image CompRessiOn (SUPERMICRO) framework based on DSC, which makes compression of the encrypted data possible without compromising security or compression efficiency. The approach guarantees data transmission and storage in a privacy-preserving manner. I tested the proposed framework on two CT image sequences and compared it with state-of-the-art JPEG 2000 lossless compression. Experimental results demonstrated that the SUPERMICRO framework provides enhanced security and privacy protection, as well as high compression performance.
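
    The syndrome-based DSC principle underlying both the DVC systems and the SUPERMICRO framework can be sketched in a few lines: the encoder sends only the syndrome of the source block, and the decoder recovers the block from correlated side information. The (7,4) Hamming code and single-bit-flip correlation model below are illustrative; the actual systems rely on far stronger codes (e.g., LDPC) and soft decoding.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Parity-check matrix of the (7,4) Hamming code: its 3-bit syndromes can
# absorb a single bit of mismatch between source and side information.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

x = rng.integers(0, 2, 7)                  # source block (encoder side only)
flip = np.zeros(7, dtype=int)
flip[rng.integers(0, 7)] = 1
y = (x + flip) % 2                         # side information (decoder side only)

s = H @ x % 2                              # encoder sends 3 bits, not 7

# Decoder: find the lowest-weight pattern e with H(y + e) = s, i.e. the
# sequence closest to the side information inside the coset indexed by s.
best = min((e for e in product([0, 1], repeat=7)
            if np.array_equal(H @ ((y + np.array(e)) % 2) % 2, s)),
           key=sum)
x_hat = (y + np.array(best)) % 2

print("recovered exactly:", np.array_equal(x_hat, x),
      "| rate: 3/7 bits per source bit")
```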