Error Correcting Codes for Distributed Control
The problem of stabilizing an unstable plant over a noisy communication link
is an increasingly important one that arises in applications of networked
control systems. Although the work of Schulman and Sahai over the past two
decades, and their development of the notions of "tree codes" and
"anytime capacity", provides the theoretical framework for studying such
problems, there has been scant practical progress in this area because explicit
constructions of tree codes with efficient encoding and decoding did not exist.
To stabilize an unstable plant driven by bounded noise over a noisy channel,
one needs real-time encoding, real-time decoding, and a reliability that
increases exponentially with decoding delay, which is precisely what tree
codes guarantee. We prove that linear tree codes occur with high probability and, for
erasure channels, give an explicit construction with an expected decoding
complexity that is constant per time instant. We give novel sufficient
conditions on the rate and reliability required of the tree codes to stabilize
vector plants and argue that they are asymptotically tight. This work takes an
important step towards controlling plants over noisy channels, and we
demonstrate the efficacy of the method through several examples.
Comment: 39 pages
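The role of exponentially decaying reliability can be illustrated with a toy scalar example (a sketch only, not the paper's tree-code construction; the plant gain, erasure probability, and control law below are assumptions):

```python
import random

# Toy illustration: an unstable scalar plant x' = a*x + u + w whose state
# crosses an erasure channel.  When the packet arrives (prob 1 - eps) the
# controller applies u = -a*x, resetting the state to the process noise;
# during an erasure burst of length k the state grows like a^k.  Second
# moments stay bounded roughly when a^2 * eps < 1, mirroring the need for
# reliability that improves exponentially with delay.

def simulate(a=1.5, eps=0.2, steps=10_000, seed=0):
    rng = random.Random(seed)
    x, peak = 0.0, 0.0
    for _ in range(steps):
        received = rng.random() > eps       # erasure with probability eps
        u = -a * x if received else 0.0     # cancel dynamics when state known
        w = rng.uniform(-1.0, 1.0)          # bounded process noise
        x = a * x + u + w
        peak = max(peak, abs(x))
    return peak

print(simulate())  # peak state magnitude stays bounded: a^2 * eps = 0.45 < 1
```

Increasing `eps` (or `a`) until `a**2 * eps > 1` makes the same simulation blow up, which is the phenomenon the tree-code machinery is built to prevent at positive rates.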
Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels
We study the problem of achieving average consensus between a group of agents
over a network with erasure links. In the context of consensus problems, the
unreliability of communication links between nodes has been traditionally
modeled by allowing the underlying graph to vary with time. In other words,
depending on the realization of the link erasures, the underlying graph at each
time instant is assumed to be a subgraph of the original graph. Implicit in
this model is the assumption that the erasures are symmetric: if at time t the
packet from node i to node j is dropped, the same is true for the packet
transmitted from node j to node i. However, in practical wireless communication
systems this assumption is unreasonable and, due to the lack of symmetry,
standard averaging protocols cannot guarantee that the network will reach
consensus to the true average. In this paper we explore the use of channel
coding to improve the performance of consensus algorithms. For symmetric
erasures, we show that, for certain ranges of the system parameters, repetition
codes can speed up the convergence rate. For asymmetric erasures we show that
tree codes (which have recently been designed for erasure channels) can be used
to simulate the performance of the original "unerased" graph. Thus, unlike
conventional consensus methods, we can guarantee convergence to the average in
the asymmetric case. The price is a slowdown in the convergence rate, relative
to the unerased network, which is still often faster than the convergence rate
of conventional consensus algorithms over noisy links.
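The symmetric-erasure setting can be sketched with a minimal averaging protocol (an illustration only; the node count, edge weights, and erasure probability below are assumptions, not the paper's parameters):

```python
import random

# Minimal average-consensus sketch on a 3-node path graph 0 - 1 - 2.
# Each step, every edge is erased symmetrically (in both directions) with
# probability eps; surviving edges exchange values with a fixed symmetric
# weight.  The symmetric update preserves the sum of the states, so the
# network still converges to the true average of the initial values.

def consensus(x0, edges, eps=0.3, w=1.0 / 3.0, steps=500, seed=1):
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        active = [e for e in edges if rng.random() > eps]  # symmetric erasures
        delta = [0.0] * len(x)
        for i, j in active:            # equal and opposite updates keep the sum
            d = w * (x[j] - x[i])
            delta[i] += d
            delta[j] -= d
        x = [xi + di for xi, di in zip(x, delta)]
    return x

x0 = [1.0, 5.0, 12.0]                  # true average is 6.0
x = consensus(x0, edges=[(0, 1), (1, 2)])
print(x)                               # all entries close to 6.0
```

Under asymmetric erasures the two `delta` updates would no longer cancel, the sum would drift, and the iterates could converge to the wrong value — the failure mode that motivates the tree-code remedy in the abstract.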
A Nonstochastic Information Theory for Communication and State Estimation
In communications, unknown variables are usually modelled as random
variables, and concepts such as independence, entropy and information are
defined in terms of the underlying probability distributions. In contrast,
control theory often treats uncertainties and disturbances as bounded unknowns
having no statistical structure. The area of networked control combines both
fields, raising the question of whether it is possible to construct meaningful
analogues of stochastic concepts such as independence, Markovness, entropy and
information without assuming a probability space. This paper introduces a
framework for doing so, leading to the construction of a maximin information
functional for nonstochastic variables. It is shown that the largest maximin
information rate through a memoryless, error-prone channel in this framework
coincides with the block-coding zero-error capacity of the channel. Maximin
information is then used to derive tight conditions for uniformly estimating
the state of a linear time-invariant system over such a channel, paralleling
recent results of Matveev and Savkin.
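The zero-error capacity that maximin information recovers can be made concrete with Shannon's classic pentagon ("typewriter") channel — a standalone textbook illustration, not an example from this paper:

```python
from itertools import combinations

# Shannon's pentagon channel: 5 inputs, and input i can be confused at the
# receiver with i-1 and i+1 (mod 5).  A zero-error code is an independent
# set of the confusability graph: no two codewords can yield the same output.

def confusable(a, b, n=5):
    return a == b or (a - b) % n in (1, n - 1)

def max_zero_error_code(n=5):
    # Brute-force the largest single-use zero-error code.
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            if all(not confusable(a, b) for a, b in combinations(cand, 2)):
                return list(cand)
    return []

print(max_zero_error_code())   # [0, 2]: 2 messages, i.e. 1 bit per use

# Over two uses, a pair is confusable only if both coordinates are.
# Shannon's code of 5 two-letter words is zero-error, giving rate
# (1/2)*log2(5) > 1 bit per use, so block coding strictly helps.
code2 = [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]
ok = all(not (confusable(a1, b1) and confusable(a2, b2))
         for (a1, a2), (b1, b2) in combinations(code2, 2))
print(ok)                      # True: 5 zero-error messages in 2 uses
```

The block-coding zero-error capacity in the abstract is the limit of such rates over ever-longer blocks, which for the pentagon equals (1/2)*log2(5) by Lovász's theta-function bound.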
Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Non-stationary/Unstable Linear Systems
Stabilization of non-stationary linear systems over noisy communication
channels is considered. Stochastically stable sources, and unstable but
noise-free or bounded-noise systems have been extensively studied in the
information theory and control theory literature since the 1970s, with a renewed
interest in the past decade. There have also been studies on non-causal and
causal coding of unstable/non-stationary linear Gaussian sources. In this
paper, tight necessary and sufficient conditions for stochastic stabilizability
of unstable (non-stationary), possibly multi-dimensional linear systems driven
by Gaussian noise over discrete channels (possibly with memory and feedback)
are presented. Stochastic stability notions include recurrence, asymptotic mean
stationarity and sample path ergodicity, and the existence of finite second
moments. Our constructive proof uses random-time state-dependent stochastic
drift criteria for stabilization of Markov chains. For asymptotic mean
stationarity (and thus sample path ergodicity), it is sufficient that the
capacity of a channel is (strictly) greater than the sum of the logarithms of
the unstable pole magnitudes for memoryless channels and a class of channels
with memory. This condition is also necessary under a mild technical condition.
Sufficient conditions for the existence of finite average second moments for
such systems driven by unbounded noise are provided.
Comment: To appear in IEEE Transactions on Information Theory
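The stabilizability threshold in the abstract — capacity strictly greater than the sum of the logarithms of the unstable pole magnitudes — can be checked numerically. A sketch with an assumed system matrix and a binary symmetric channel (the paper's setting covers channels with memory and feedback):

```python
import math

# Sketch: test the condition  C > sum_i log2|lambda_i|  (over |lambda_i| > 1)
# for an assumed set of plant poles and a binary symmetric channel with
# crossover probability p.

def bsc_capacity(p):
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy
    return 1.0 - h

eigenvalues = [1.2, 1.1, 0.5]          # assumed poles; two are unstable
unstable_log_sum = sum(math.log2(abs(l)) for l in eigenvalues if abs(l) > 1)

C = bsc_capacity(0.05)
print(f"sum log2|lambda| = {unstable_log_sum:.3f}, C = {C:.3f}")
print("condition holds" if C > unstable_log_sum else "condition fails")
```

Here the sum is about 0.40 bits and the BSC(0.05) capacity about 0.71 bits, so the sufficient condition for asymptotic mean stationarity is met for this assumed plant.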
Coded Kalman Filtering Over Gaussian Channels with Feedback
This paper investigates the problem of zero-delay joint source-channel coding
of a vector Gauss-Markov source over a multiple-input multiple-output (MIMO)
additive white Gaussian noise (AWGN) channel with feedback. In contrast to the
classical problem of causal estimation using noisy observations, we examine a
system where the source can be encoded before transmission. An encoder,
equipped with feedback of past channel outputs, observes the source state and
encodes the information in a causal manner as inputs to the channel while
adhering to a power constraint. The objective of the code is to estimate the
source state with minimum mean square error at the infinite horizon. This work
establishes a fundamental theorem for two scenarios: for the transmission of an
unstable vector Gauss-Markov source over either a multiple-input single-output
(MISO) or a single-input multiple-output (SIMO) AWGN channel, finite estimation
error is achievable if and only if the sum of logs of the unstable eigenvalues
of the state gain matrix is less than the Shannon channel capacity. We prove
these results by showing an optimal linear innovations encoder that can be
applied to sources and channels of any dimension and analyzing it together with
the corresponding Kalman filter decoder.
Comment: Presented at the 59th Allerton Conference on Communication, Control, and Computing
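In the scalar case the theorem can be visualized through the error-variance recursion of a linear innovations scheme (a sketch under assumed parameters, not the paper's full MIMO analysis): sending the scaled innovation over the AWGN channel each step yields P' = a^2 P/(1+SNR) + W, which stays bounded exactly when log2|a| < (1/2) log2(1+SNR), the channel capacity.

```python
import math

# Scalar sketch of the coded-Kalman threshold: source x' = a*x + w with
# noise variance W, linear innovations encoding over an AWGN channel at
# signal-to-noise ratio SNR.  The decoder's error variance follows
#     P_{t+1} = a^2 * P_t / (1 + SNR) + W,
# which converges iff a^2 < 1 + SNR, i.e. log2|a| < 0.5*log2(1+SNR) = C.

def error_variance(a, snr, W=1.0, steps=200, P0=10.0):
    P = P0
    for _ in range(steps):
        P = a * a * P / (1.0 + snr) + W
    return P

a, snr = 1.5, 3.0                      # assumed parameters
C = 0.5 * math.log2(1.0 + snr)         # AWGN capacity: 1 bit per use
print(math.log2(a) < C)                # True: finite error is achievable
print(error_variance(a, snr))          # settles at W / (1 - a^2/(1+snr))
```

When log2|a| exceeds C (e.g. a = 2.5 at the same SNR, since 6.25 > 4), the same recursion diverges, matching the "if and only if" threshold in the abstract.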