Multi-rate control over AWGN channels via analog joint source-channel coding
We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of the communication link is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common, since sampling is done at a rate that captures the dynamics of the plant, which is often much lower than the rate at which communication is possible. This setting offers the opportunity to improve system performance by employing multiple channel uses to convey a single message (a plant output observation or a control input). Common ways of doing so are to repeat the message, or to quantize it to a number of bits and then transmit a channel-coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short, we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvements in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
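As a rough illustration of the kind of analog map involved, the sketch below encodes one real source sample onto two channel uses via an Archimedean bi-spiral (one arm per sign of the sample) and decodes by minimum-distance grid search, which is ML for an AWGN channel. The winding parameter `a`, the search grid, and the absence of power normalization are simplifying assumptions for illustration, not the construction from the paper.

```python
import numpy as np

def sk_encode(s, a=2.0):
    """Map a scalar sample onto one of two point-symmetric Archimedean
    spiral arms (bi-spiral): one sample -> two channel uses.
    'a' controls how densely the spiral winds (illustrative choice)."""
    theta = a * abs(s)
    arm = 1.0 if s >= 0 else -1.0  # sign of the sample selects the arm
    return arm * np.array([theta * np.cos(theta), theta * np.sin(theta)])

def sk_decode(r, a=2.0, grid=np.linspace(-5, 5, 20001)):
    """Minimum-distance decoding by brute-force search over a source grid
    (ML under AWGN; a real decoder would exploit the spiral geometry)."""
    pts = np.array([sk_encode(s, a) for s in grid])
    return grid[np.argmin(np.sum((pts - r) ** 2, axis=1))]
```

Noiselessly, `sk_decode(sk_encode(s))` recovers `s` up to the grid resolution; under AWGN, small noise moves the estimate along the spiral (graceful degradation) while large noise can jump between windings, which is the threshold effect such maps trade off.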
Directed Data-Processing Inequalities for Systems with Feedback
We present novel data-processing inequalities relating the mutual information
and the directed information in systems with feedback. The internal blocks
within such systems are restricted only to be causal mappings, but are allowed
to be non-linear, stochastic and time varying. These blocks can for example
represent source encoders, decoders or even communication channels. Moreover,
the involved signals can be arbitrarily distributed. Our first main result
relates mutual and directed informations and can be interpreted as a law of
conservation of information flow. Our second main result is a pair of
data-processing inequalities (one the conditional version of the other) between
nested pairs of random sequences entirely within the closed loop. Our third
main result introduces and characterizes the notion of in-the-loop (ITL)
transmission rate for channel coding scenarios in which the messages are
internal to the loop. Interestingly, in this case the conventional notions of
transmission rate associated with the entropy of the messages and of channel
capacity based on maximizing the mutual information between the messages and
the output turn out to be inadequate. Instead, as we show, the ITL transmission
rate is the unique notion of rate for which a channel code attains zero error
probability if and only if such ITL rate does not exceed the corresponding
directed information rate from messages to decoded messages. We apply our
data-processing inequalities to show that the supremum of achievable (in the
usual channel coding sense) ITL transmission rates is upper bounded by the
supremum of the directed information rate across the communication channel.
Moreover, we present an example in which this upper bound is attained. Finally, ...
Comment: Submitted to Entropy. arXiv admin note: substantial text overlap with
arXiv:1301.642
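To make the central quantity concrete, the toy sketch below computes a plug-in estimate of a directed information rate between two binary sequences under a first-order Markov approximation, (1/n) I(X^n -> Y^n) ~ H(Y_i | Y_{i-1}) - H(Y_i | Y_{i-1}, X_i). The estimator and the memoryless-channel example are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from collections import Counter

def entropy(counts):
    """Entropy in bits of an empirical distribution given as a Counter."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(pairs):
    """Plug-in estimate of H(B | A) from (a, b) observations."""
    joint = Counter(pairs)                 # counts of (a, b)
    marg = Counter(a for a, _ in pairs)    # counts of a
    return entropy(joint) - entropy(marg)  # H(A, B) - H(A)

def directed_info_rate(x, y):
    """Plug-in estimate of the directed information rate under a
    first-order Markov approximation."""
    n = len(x)
    hy = conditional_entropy([(y[i - 1], y[i]) for i in range(1, n)])
    hyx = conditional_entropy([((y[i - 1], x[i]), y[i]) for i in range(1, n)])
    return hy - hyx
```

For a memoryless channel without feedback, directed information reduces to mutual information; e.g., for a binary symmetric channel with crossover 0.1 the estimate approaches 1 - h(0.1) ≈ 0.531 bits per use.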
Belief Refinement Approaches to Communication and Inference Problems
This dissertation considers a problem where a single agent or a group of agents aim to estimate/learn unknown (possibly time-varying) parameters of interest despite making noisy observations. The agents take a Bayesian-like approach by maintaining a posterior probability distribution or “belief” over a parameter space conditioned on past observations. The agents aim to iteratively refine their belief over the parameter space as new information is acquired from their private observations or through collaboration with other agents. In particular, the agents aim to ensure that sufficient belief is assigned in neighborhoods centered around the true parameter with high probability or “reliability”. In the context of communication problems considered in this dissertation, the agents may be active, i.e., agents may additionally take actions which provide new observations. Furthermore, agents may employ an adaptive strategy, i.e., using their past actions and the resulting observations, agents can adaptively choose actions to control the concentration of the belief. When the agents are active, we propose and analyze adaptive belief refinement approaches to obtain belief concentration on the unknown parameter with high reliability. In a different context, namely that of decentralized inference, we consider passive agents. Here, agents face an additional challenge due to the statistical insufficiency of their private observations to learn the unknown parameter. While individual agents' observations are not informative enough, we assume that the agents' observations are collectively informative to learn the unknown parameter. Here, we propose and analyze decentralized belief refining strategies to collaboratively obtain belief concentration on the unknown parameter.
In the first part of this dissertation, we consider active strategies that are extensions of the posterior matching strategy (PM) introduced by Horstein, which is a generalization of the well-known binary search algorithm. We propose and analyze PM-based strategies in the context of modern communication systems, namely the problem of establishing initial access in mm-Wave communication and spectrum sensing for Cognitive Radio. We propose and analyze channel coding strategies for real-time streaming and control applications. The second part of the dissertation investigates belief refinement approaches for decentralized learning. In particular, it focuses on developing and analyzing a decentralized learning rule for statistical hypothesis testing and its application to decentralized machine learning.
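A minimal sketch of the Horstein-style posterior matching idea over a binary symmetric channel with noiseless feedback: the encoder compares the message point to the median of the receiver's posterior, and both ends run the same Bayesian update on the received bit. The discretized grid, the parameter choices, and the MAP readout are simplifications for illustration, not the dissertation's schemes.

```python
import numpy as np

def posterior_matching_bsc(theta, p=0.1, n_rounds=100, m=4096, seed=0):
    """Horstein-style posterior matching over a BSC(p) with noiseless
    feedback: a stochastic bisection search for theta in [0, 1),
    on a discretized belief grid (toy sketch)."""
    rng = np.random.default_rng(seed)
    grid = (np.arange(m) + 0.5) / m        # cell centers in [0, 1)
    post = np.full(m, 1.0 / m)             # uniform prior belief
    for _ in range(n_rounds):
        median = grid[np.searchsorted(np.cumsum(post), 0.5)]
        bit = int(theta >= median)          # encoder: compare to posterior median
        rx = bit ^ int(rng.random() < p)    # BSC flips the bit w.p. p
        lik = np.where((grid >= median) == bool(rx), 1 - p, p)
        post *= lik                         # Bayes update, known to both ends
        post /= post.sum()                  # via the feedback link
    return grid[np.argmax(post)]            # MAP estimate of theta
```

With noiseless feedback the belief bisection resolves roughly one channel capacity's worth of bits per use, so the posterior concentrates on the cell containing `theta` exponentially fast.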
Causal Sampling, Compressing, and Channel Coding of Streaming Data
With the emergence of the Internet of Things, communication systems, such as those employed in distributed control and tracking scenarios, are becoming increasingly dynamic, interactive, and delay-sensitive. The data in such real-time systems arrive at the encoder progressively in a streaming fashion. An intriguing question is: what codes can transmit streaming data with both high reliability and low latency? Classical non-causal (block) encoding schemes can transmit data reliably but under the assumption that the encoder knows the entire data block before the transmission. While this is a realistic assumption in delay-tolerant systems, it is ill-suited to real-time systems due to the delay introduced by collecting data into a block. This thesis studies causal encoding: the encoder transmits information based on the causally received data while the data is still streaming in and immediately incorporates the newly received data into a continuing transmission on the fly.
This thesis investigates causal encoding of streaming data in three scenarios: causal sampling, causal lossy compressing, and causal joint source-channel coding (JSCC). In the causal sampling scenario, a sampler observes a continuous-time source process and causally decides when to transmit real-valued samples of it under a constraint on the average number of samples per second; an estimator uses the causally received samples to approximate the source process in real time. We propose a causal sampling policy that achieves the best tradeoff between the sampling frequency and the end-to-end real-time estimation distortion for a class of continuous Markov processes. In the causal lossy compressing scenario, the sampling frequency constraint in the causal sampling scenario is replaced by a rate constraint on the average number of bits per second. We propose a causal code that achieves the best causal distortion-rate tradeoff for the same class of processes. In the causal JSCC scenario, the noiseless channel and the continuous-time process in the previous scenarios are replaced by a discrete memoryless channel with feedback and a sequence of streaming symbols, respectively. We propose a causal joint source-channel code that achieves the maximum exponentially decaying rate of the error probability compatible with a given rate. Remarkably, the fundamental limits in the causal lossy compressing and the causal JSCC scenarios achieved by our causal codes are no worse than those achieved by the best non-causal codes. In addition to deriving the fundamental limits and presenting the causal codes that achieve the limits, we also show that our codes apply to control systems, are resilient to system deficiencies such as channel delay and noise, and have low complexity.
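For intuition about causal sampling, the toy simulation below applies an event-triggered threshold policy, which is the known optimal structure for the Wiener process under a sampling-frequency constraint, to a discrete-time random walk approximating a Wiener process. The threshold, step size, and hold-last-sample estimator are illustrative assumptions, not the thesis's policy for general Markov processes.

```python
import numpy as np

def simulate(threshold=1.0, n=200_000, dt=0.01, seed=1):
    """Event-triggered causal sampling of an Euler-discretized Wiener
    process: transmit the current value whenever the estimation error
    |W_t - W_hat| crosses the threshold; the estimator holds the last
    received sample. Returns (samples per second, mean-square error)."""
    rng = np.random.default_rng(seed)
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))  # unit-diffusion walk
    est = 0.0
    err2 = 0.0
    samples = 0
    for wt in w:
        if abs(wt - est) >= threshold:
            est = wt          # transmit a fresh sample on threshold crossing
            samples += 1
        err2 += (wt - est) ** 2
    return samples / (n * dt), err2 / n
```

For the Wiener process a threshold beta yields a mean inter-sample time of beta^2 and a mean-square error of about beta^2/6, so with beta = 1 the simulation should report roughly one sample per second and an MSE near 1/6 (slightly worse due to discretization overshoot).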
(Almost) practical tree codes
We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.
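As a toy illustration of the two ingredients (not the codes or the decoder analyzed in the paper), the sketch below pairs a random causal rate-1/2 linear tree code over a BSC with stack-algorithm sequential decoding under the Fano metric. The random parity masks, the forced dependence on the current bit, and the parameter choices are assumptions made for the sketch.

```python
import heapq
import numpy as np

def encode_step(bits, t, masks):
    """Two parity bits at time t from the causal prefix bits[0..t];
    masks[t, j, :t+1] selects which past/current bits enter parity j."""
    prefix = np.array(bits[:t + 1], dtype=int)
    return [int(prefix @ masks[t, j, :t + 1] % 2) for j in range(2)]

def stack_decode(y, n, masks, p=0.05):
    """Stack (sequential) decoding with the Fano metric for a rate-1/2
    causal linear tree code over a BSC(p): best-first search over the
    code tree, extending the most promising prefix one step at a time."""
    good = np.log2(1 - p) + 0.5   # per-channel-bit Fano metric, rate 1/2
    bad = np.log2(p) + 0.5
    heap = [(0.0, [])]            # (negative path metric, decoded prefix)
    while heap:
        neg_m, bits = heapq.heappop(heap)
        t = len(bits)
        if t == n:
            return bits           # first full-length path popped wins
        for b in (0, 1):
            ext = bits + [b]
            c = encode_step(ext, t, masks)
            dm = sum(good if c[j] == y[t][j] else bad for j in (0, 1))
            heapq.heappush(heap, (neg_m - dm, ext))
```

Because decoding is best-first rather than exhaustive, the work per decoded bit stays small when the channel is quiet and grows only when noise makes competing tree paths look plausible, which is the usual sequential-decoding tradeoff the paper's complexity caveats refer to.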