
    Successive Refinement of Shannon Cipher System Under Maximal Leakage

    We study the successive refinement setting of the Shannon cipher system (SCS) under a maximal leakage constraint, for discrete memoryless sources and bounded distortion measures. Specifically, we generalize the threat model of the point-to-point rate-distortion setting of Issa, Wagner and Kamath (T-IT 2020) to the multiterminal successive refinement setting. Under mild conditions corresponding to partial secrecy, we characterize the asymptotically optimal normalized maximal leakage region under both the joint excess-distortion probability (JEP) and the expected distortion reliability constraints. Under JEP, in the achievability part, we propose a type-based coding scheme, analyze its reliability guarantee, and bound the leakage about the information source through its compressed versions. In the converse part, we prove the optimality of the achievability result by analyzing a guessing scheme of the eavesdropper. Under expected distortion, the achievability part is established analogously to the JEP counterpart, while the converse proof generalizes the corresponding results for the rate-distortion setting of the SCS by Schieler and Cuff (T-IT 2014) to the successive refinement setting. Somewhat surprisingly, the normalized maximal leakage regions under the JEP and expected distortion constraints are identical under certain conditions, even though JEP appears to be the stronger reliability constraint.
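    For reference, the leakage measure in question, due to Issa, Wagner and Kamath, admits a closed-form expression. The display below is the standard statement of the definition; the normalized pair for the two successive refinement messages is our reading of the setting rather than a quotation from the paper:

        L(X -> Y) = log Σ_y max_{x : P_X(x) > 0} P_{Y|X}(y|x),

    i.e., the logarithm of the sum, over outputs y, of the largest conditional probability of y over all feasible inputs x. In the successive refinement SCS, with M_1 denoting the base-layer message and (M_1, M_2) the refined description, the quantities presumably being traded off are the normalized leakages (1/n) L(X^n -> M_1) and (1/n) L(X^n -> (M_1, M_2)), whose jointly achievable region the abstract characterizes.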

    Tracking an Auto-Regressive Process with Limited Communication per Unit Time

    Samples from a high-dimensional AR[1] process are observed by a sender that can communicate only finitely many bits per unit time to a receiver. The receiver seeks to form an estimate of the process value at every time instant, in real time. We consider a time-slotted communication model in a slow-sampling regime where multiple communication slots occur between consecutive sampling instants. We propose a successive update scheme that uses the communication between sampling instants to refine the estimate of the latest sample, and study the following question: is it better to accumulate the communication of multiple slots and send better-refined estimates, making the receiver wait longer for every refinement, or to be fast but loose and send new information at every communication opportunity? We show that the fast but loose successive update scheme with ideal spherical codes is universally optimal in the asymptotic regime of large dimension. However, most practical quantization codes for fixed dimensions fall short of the ideal performance required for this optimality; they typically incur a bias in the form of a fixed additive error. Interestingly, our analysis shows that the fast but loose scheme is no longer optimal in the presence of such errors, and a judiciously chosen update frequency outperforms it.
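    As a toy illustration of the trade-off studied here, the sketch below (our construction, not the paper's scheme: a scalar AR(1) source in place of the high-dimensional one, a uniform quantizer with range halving as a crude stand-in for successive refinement, and illustrative parameters throughout) compares updating at every slot against batching k slots into one finer update:

        import numpy as np

        rng = np.random.default_rng(0)
        alpha, sigma = 0.9, 1.0   # AR(1) coefficient and innovation std (illustrative)
        k, R, T = 4, 2, 20_000    # slots per sample, bits per slot, number of samples

        def quantize(e, bits, v):
            """Uniform quantizer with 2**bits levels on [-v, v]; clips overload."""
            levels = 2 ** bits
            step = 2.0 * v / levels
            idx = min(levels - 1, max(0, int((e + v) // step)))
            return -v + (idx + 0.5) * step

        def run(update_slots, bits):
            """MSE per slot when `bits` bits are sent at each slot index in update_slots."""
            x = x_hat = mse = 0.0
            for _ in range(T):
                x = alpha * x + sigma * rng.standard_normal()  # fresh sample
                v = 4.0 * sigma                                # initial quantizer range
                for s in range(k):
                    if s in update_slots:
                        x_hat += quantize(x - x_hat, bits, v)  # refine latest estimate
                        v /= 2 ** bits                         # shrink range per refinement
                    mse += (x - x_hat) ** 2
            return mse / (T * k)

        print("fast but loose (R bits every slot):", run(set(range(k)), R))
        print("batched (k*R bits in the last slot):", run({k - 1}, k * R))

    Per the abstract, the fast but loose extreme is asymptotically optimal with ideal codes; with the fixed additive errors of practical quantizers, the relative performance of the two extremes depends on the parameters, which is the regime in which the paper shows an intermediate update frequency can win.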

    Adaptive data acquisition for communication networks

    In an increasing number of communication systems, such as sensor networks or local area networks within medical, financial or military institutions, nodes communicate information sources (e.g., video, audio) over multiple hops. Moreover, nodes have, or can acquire, correlated information sources from the environment, e.g., from databases or from measurements. Among the new design problems raised by these scenarios, two key issues are addressed in this dissertation: 1) how to preserve the consistency of sensitive information across multiple hops; and 2) how to incorporate actuation, in the form of data acquisition and network probing, in the optimization of the communication network. These aspects are investigated using information-theoretic (source and channel coding) models, obtaining fundamental insights that are corroborated by various illustrative examples.

    To address point 1), the problem of cascade source coding with side information is investigated. The motivating observation is that, in this class of problems, the estimate of the source obtained at the decoder cannot in general be reproduced at the encoder if it depends directly on the side information. In some applications, such as those mentioned above, this lack of consistency may be undesirable, and a so-called Common Reconstruction (CR) requirement, whereby one imposes that the encoder be able to reproduce the decoder's estimate, may instead be in order. The rate-distortion region is derived here for some special cases of the cascade source coding problem and of the related Heegard-Berger (HB) problem under the CR constraint.

    As for point 2), the work is motivated by the fact that, in order to enable, or to facilitate, the exchange of information, nodes of a communication network routinely take various types of actions, such as data acquisition or network probing. For instance, sensor nodes schedule the operation of their sensing devices to measure given physical quantities of interest, and wireless nodes probe the state of the channel via training. The problem of optimal data acquisition is studied for a cascade source coding problem, a distributed source coding problem and a two-way source coding problem, assuming that the side information sequences can be controlled via the selection of cost-constrained actions. It is shown that a joint design of the description of the source and of the control signals used to guide the selection of actions at downstream nodes is generally necessary for an efficient use of the available communication links. The problem of optimal channel probing is instead studied for a broadcast channel and a point-to-point link in which the decoder is interested in estimating not only the message but also the state sequence. Finally, the problem of embedding information in the actions themselves is studied for both the source and channel coding set-ups described above.
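    To make the CR requirement from point 1) concrete in the simplest single-hop case (our paraphrase, with U an auxiliary random variable; this is the known Wyner-Ziv-with-CR result, usually attributed to Steinberg, rather than anything specific to the cascade and HB settings treated in the dissertation): the decoder forms X̂^n = g(M, Y^n) from the message and its side information, and CR demands a map ψ with Pr{ψ(X^n) ≠ X̂^n} -> 0, i.e., the encoder can reproduce the decoder's estimate without seeing Y^n. The resulting rate-distortion function takes the form

        R_CR(D) = min I(X; U | Y),

    with the minimum over auxiliaries U satisfying the Markov chain U - X - Y and admitting a reconstruction function of U alone that meets E[d(X, X̂)] <= D. Letting the reconstruction depend on Y as well is exactly what the CR constraint rules out, and it is the source of the rate penalty relative to plain Wyner-Ziv coding.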

    Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design

    Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and it has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based upon a paradigm called predictive video coding, in which the video source frames X_i, i = 1, 2, ..., N, are encoded frame by frame, and the encoder and decoder for each frame X_i enlist help only from all previously encoded frames S_j, j = 1, 2, ..., i-1. In this thesis, we look beyond all existing and proposed video coding standards and introduce a new paradigm called causal video coding, in which the encoder for each frame X_i can use all previous original frames X_j, j = 1, 2, ..., i-1, as well as all previously encoded frames S_j, while the corresponding decoder can use only the previously encoded frames. All studies, comparisons, and designs of causal video coding in this thesis are conducted from an information-theoretic point of view.

    Let R*_c(D_1, ..., D_N) (respectively, R*_p(D_1, ..., D_N)) denote the minimum total rate required to achieve a given distortion level D_1, ..., D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare R*_c(D_1, ..., D_N). Specifically, we first show that for jointly stationary and ergodic sources X_1, ..., X_N, R*_c(D_1, ..., D_N) is equal to the infimum of the n-th order total rate-distortion function R_{c,n}(D_1, ..., D_N) over all n, where R_{c,n}(D_1, ..., D_N) is itself given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D_1, ..., D_N) and demonstrate its convergence to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*_c(D_1, ..., D_N) in a novel way when the N sources form an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more and less coding theorem): under some conditions on the source frames and distortion levels, the more frames that need to be encoded and transmitted, the less data actually has to be sent after encoding. With the help of the algorithm, it is also shown by example that R*_c(D_1, ..., D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, in which each frame is encoded in a locally optimal manner based on all information available to its encoder. As a by-product, an extended Markov lemma is established for correlated ergodic sources.

    From an information-theoretic point of view, it is interesting to compare causal video coding with predictive video coding, upon which all existing video coding standards are based. In this thesis, fixing N = 3, we first derive a single-letter characterization of R*_p(D_1, D_2, D_3) for an IID vector source (X_1, X_2, X_3) in which X_1 and X_2 are independent, and then demonstrate the existence of such X_1, X_2, X_3 for which R*_p(D_1, D_2, D_3) > R*_c(D_1, D_2, D_3) under some conditions on the source frames and distortion levels. This result makes causal video coding an attractive framework for future video coding systems and standards.
    The design of causal video coding systems is also considered in the thesis from an information-theoretic perspective, by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimal fixed-rate causal scalar quantizers for causal video coding that minimize the total distortion over all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
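    The iterative computation of R_{c,n} described above is, per the abstract, an alternating minimization over auxiliary random variables. As a concrete reference point only, the sketch below implements the classical Blahut-Arimoto recursion for the ordinary point-to-point rate-distortion function, which this family of algorithms generalizes (a standard textbook algorithm, not the thesis's multi-frame version; the source, distortion matrix, and slope parameter are illustrative):

        import numpy as np

        def blahut_arimoto_rd(p_x, d, s, tol=1e-9, max_iter=10_000):
            """Trace one (D, R) point of the rate-distortion curve at slope s > 0.

            p_x: source distribution, shape (|X|,)
            d:   distortion matrix d[x, xhat], shape (|X|, |Xhat|)
            """
            q = np.full(d.shape[1], 1.0 / d.shape[1])  # reproduction distribution, init uniform
            A = np.exp(-s * d)                         # kernel fixed by the slope
            for _ in range(max_iter):
                w = q * A                              # optimal test channel p(xhat|x) given q ...
                w /= w.sum(axis=1, keepdims=True)      # ... after row normalization
                q_new = p_x @ w                        # optimal q given the test channel
                if np.max(np.abs(q_new - q)) < tol:
                    q = q_new
                    break
                q = q_new
            w = q * A
            w /= w.sum(axis=1, keepdims=True)
            D = float(np.sum(p_x[:, None] * w * d))
            R = float(np.sum(p_x[:, None] * w * np.log(w / q)))  # in nats
            return D, R

        # Example: uniform binary source with Hamming distortion (illustrative)
        p_x = np.array([0.5, 0.5])
        d = np.array([[0.0, 1.0], [1.0, 0.0]])
        print(blahut_arimoto_rd(p_x, d, s=2.0))

    The thesis's algorithm must additionally handle the coupling across the N frames through the auxiliary random variables, which is where its global-convergence guarantee carries the weight.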

    Interference Channels with Destination Cooperation

    Interference is a fundamental feature of the wireless channel. To better understand the role of cooperation in interference management, we study the two-user Gaussian interference channel in which the destination nodes can cooperate by virtue of being able to both transmit and receive. The sum-capacity of this channel is characterized to within a constant number of bits. The coding scheme employed builds upon the superposition scheme of Han and Kobayashi (1981) for two-user interference channels without cooperation. New upper bounds on the sum-capacity are also derived.
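    For context on the baseline scheme (our summary, not from the paper): in Han-Kobayashi coding, each message m_i is split into a common part m_{i,c}, decoded by both receivers, and a private part m_{i,p}, decoded only by the intended receiver, with the transmitted signal formed by superposition, e.g. X_i = U_i(m_{i,c}) + V_i(m_{i,p}) in the Gaussian case. A now-standard rule of thumb from the constant-gap literature (Etkin, Tse and Wang) is to choose the private power so that it reaches the unintended receiver at roughly the noise level, so that treating it as noise costs only a bounded number of bits; constant-gap results such as the sum-capacity characterization above are typically built around schemes of this kind.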

    Lecture Notes on Network Information Theory

    These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/