Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Non-stationary/Unstable Linear Systems
Stabilization of non-stationary linear systems over noisy communication
channels is considered. Stochastically stable sources and unstable but
noise-free or bounded-noise systems have been studied extensively in the
information theory and control theory literature since the 1970s, with renewed
interest in the past decade. There have also been studies on non-causal and
causal coding of unstable/non-stationary linear Gaussian sources. In this
paper, tight necessary and sufficient conditions for stochastic stabilizability
of unstable (non-stationary) possibly multi-dimensional linear systems driven
by Gaussian noise over discrete channels (possibly with memory and feedback)
are presented. Stochastic stability notions include recurrence, asymptotic mean
stationarity and sample path ergodicity, and the existence of finite second
moments. Our constructive proof uses random-time state-dependent stochastic
drift criteria for stabilization of Markov chains. For asymptotic mean
stationarity (and thus sample path ergodicity), it is sufficient that the
capacity of a channel is (strictly) greater than the sum of the logarithms of
the unstable pole magnitudes for memoryless channels and a class of channels
with memory. This condition is also necessary under a mild technical condition.
Sufficient conditions for the existence of finite average second moments for
such systems driven by unbounded noise are provided.
Comment: To appear in IEEE Transactions on Information Theory
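The capacity condition stated in this abstract can be sketched numerically. The helper below is a minimal illustration (not from the paper): it computes the sum of the base-2 logarithms of the unstable eigenvalue magnitudes of a system matrix, the threshold that the channel capacity must strictly exceed; the function name and the example matrix are hypothetical.

```python
import numpy as np

def min_capacity_bits(A):
    """Sum of log2 magnitudes of the unstable eigenvalues of A.

    By the data-rate condition described in the abstract, a channel
    capacity (bits per time step) strictly above this value suffices
    for asymptotic mean stationarity (for memoryless channels and a
    class of channels with memory).
    """
    eigs = np.linalg.eigvals(np.asarray(A, dtype=float))
    unstable = [abs(lam) for lam in eigs if abs(lam) > 1.0]
    return float(sum(np.log2(m) for m in unstable))

# Example: a 2-D system with an unstable mode at 2 and a stable mode at 0.5;
# only the unstable mode contributes to the required rate.
A = [[2.0, 0.0], [0.0, 0.5]]
print(min_capacity_bits(A))  # 1.0 bit per time step
```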
Stabilization of Linear Systems Over Gaussian Networks
The problem of remotely stabilizing a noisy linear time invariant plant over
a Gaussian relay network is addressed. The network is comprised of a sensor
node, a group of relay nodes and a remote controller. The sensor and the relay
nodes operate subject to an average transmit power constraint and they can
cooperate to communicate the observations of the plant's state to the remote
controller. The communication links between all nodes are modeled as Gaussian
channels. Necessary as well as sufficient conditions for mean-square
stabilization over various network topologies are derived. The sufficient
conditions are in general obtained using delay-free linear policies and the
necessary conditions are obtained using information theoretic tools. Different
settings where linear policies are optimal, asymptotically optimal (in certain
parameters of the system) and suboptimal have been identified. For the case
with noisy multi-dimensional sources controlled over scalar channels, it is
shown that linear time-varying policies lead to minimum capacity requirements,
meeting the fundamental lower bound. For the case with noiseless sources and
parallel channels, non-linear policies that meet the lower bound have been
identified.
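For the simplest instance of this problem, a scalar plant over a single Gaussian channel, the feasibility test reduces to a classical signal-to-noise-ratio condition. The sketch below illustrates that scalar baseline (it is the standard scalar result, not the network topologies of the abstract; the function name is hypothetical): mean-square stabilization by linear policies is possible when the Shannon capacity of the AWGN link exceeds the log of the unstable pole.

```python
import math

def mss_feasible_awgn(a, snr):
    """Scalar AWGN mean-square stabilizability check (classical result).

    For a scalar plant x[k+1] = a*x[k] + w[k] controlled over a
    memoryless AWGN channel with signal-to-noise ratio `snr`,
    stabilization is possible iff the channel capacity
    0.5*log2(1 + snr) exceeds log2|a|, i.e. snr > a**2 - 1.
    """
    capacity = 0.5 * math.log2(1.0 + snr)
    return capacity > math.log2(abs(a))

print(mss_feasible_awgn(2.0, 4.0))  # True:  capacity ~ 1.16 > log2(2) = 1
print(mss_feasible_awgn(2.0, 2.0))  # False: capacity ~ 0.79 < 1
```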
A Nonstochastic Information Theory for Communication and State Estimation
In communications, unknown variables are usually modelled as random
variables, and concepts such as independence, entropy and information are
defined in terms of the underlying probability distributions. In contrast,
control theory often treats uncertainties and disturbances as bounded unknowns
having no statistical structure. The area of networked control combines both
fields, raising the question of whether it is possible to construct meaningful
analogues of stochastic concepts such as independence, Markovness, entropy and
information without assuming a probability space. This paper introduces a
framework for doing so, leading to the construction of a maximin information
functional for nonstochastic variables. It is shown that the largest maximin
information rate through a memoryless, error-prone channel in this framework
coincides with the block-coding zero-error capacity of the channel. Maximin
information is then used to derive tight conditions for uniformly estimating
the state of a linear time-invariant system over such a channel, paralleling
recent results of Matveev and Savkin.
Stochastic Stability Analysis of Discrete Time System Using Lyapunov Measure
In this paper, we study the stability problem of a stochastic, nonlinear,
discrete-time system. We introduce a linear transfer operator-based Lyapunov
measure as a new tool for stability verification of stochastic systems. A
weaker, set-theoretic notion of almost-everywhere stochastic stability is
introduced and verified using Lyapunov-measure-based stochastic stability
theorems. Furthermore, a connection between Lyapunov functions, a popular tool
for stochastic stability verification, and Lyapunov measures is established.
Using the duality between the linear transfer Perron-Frobenius and Koopman
operators, we show that the Lyapunov measure and the Lyapunov function used for
the verification of stochastic stability are dual to each other. Set-oriented
numerical methods are proposed for the finite-dimensional approximation of the
Perron-Frobenius operator, and hence of the Lyapunov measure. Stability results
in the finite-dimensional approximation space are also presented. The
finite-dimensional approximation is shown to introduce a further, weaker notion
of stability, referred to as coarse stochastic stability. The results in this
paper extend our earlier work on the use of Lyapunov measures for
almost-everywhere stability verification of deterministic dynamical systems
("Lyapunov Measure for Almost Everywhere Stability", {\it IEEE Trans. on
Automatic Control}, Vol. 53, No. 1, Feb. 2008).
Comment: Proceedings of American Control Conference, Chicago IL, 201
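The set-oriented approximation mentioned in this abstract is commonly realized by an Ulam-type construction: partition the state space into cells and estimate the fraction of each cell that the dynamics maps into every other cell. The sketch below is a generic 1-D illustration of that construction under assumed dynamics (it is not the paper's code; the function name, the sampling scheme, and the example map are assumptions).

```python
import numpy as np

def ulam_matrix(f, grid_edges, samples_per_cell=100, seed=0):
    """Ulam-type finite-dimensional approximation of the
    Perron-Frobenius operator for a 1-D map f on a partitioned interval.

    Entry P[i, j] estimates the fraction of cell i that f maps into
    cell j, so the row-stochastic matrix P pushes densities forward.
    Stochastic dynamics can be modeled by adding noise inside f.
    """
    rng = np.random.default_rng(seed)
    n = len(grid_edges) - 1
    P = np.zeros((n, n))
    for i in range(n):
        # Sample points in cell i and locate the cells of their images.
        xs = rng.uniform(grid_edges[i], grid_edges[i + 1], samples_per_cell)
        images = f(xs)
        js = np.searchsorted(grid_edges, images, side="right") - 1
        for j in js[(js >= 0) & (js < n)]:
            P[i, j] += 1.0
    return P / samples_per_cell

# Example: the contracting map f(x) = 0.5*x on [0, 1] with 4 cells.
edges = np.linspace(0.0, 1.0, 5)
P = ulam_matrix(lambda x: 0.5 * x, edges)
print(P.sum(axis=1))  # each row sums to 1: all mass stays in [0, 1]
```

Stability statements in the finite-dimensional space are then read off from such a matrix, which is what motivates the "coarse" stability notion in the abstract.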
Time-triggering versus event-triggering control over communication channels
Time-triggered and event-triggered control strategies for stabilization of an
unstable plant over a rate-limited communication channel subject to unknown,
bounded delay are studied and compared. Event triggering carries implicit
information, revealing the state of the plant. However, the delay in the
communication channel causes information loss, as it makes the state
information out of date. There is a critical delay value at which the loss of
information due to the communication delay exactly offsets the implicit
information carried by the triggering events. This occurs when the maximum
delay equals the inverse of the entropy rate of the plant. In this context,
extensions of our previous results on event-triggering strategies are
presented for vector systems and compared with the data-rate theorem for
time-triggered control, which is extended here to a setting with unknown delay.
Comment: To appear in the 56th IEEE Conference on Decision and Control (CDC),
Melbourne, Australia. arXiv admin note: text overlap with arXiv:1609.0959
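The critical-delay statement above can be made concrete for a scalar plant. The sketch below assumes the continuous-time model dx/dt = a*x with a > 0, whose intrinsic entropy rate is a*log2(e) bits per second; the critical delay is then the inverse of that rate. This is a back-of-the-envelope illustration of the abstract's claim, and the function name is hypothetical.

```python
import math

def critical_delay(a):
    """Critical communication delay for the scalar unstable plant
    dx/dt = a*x (a > 0, assumed model).

    The plant's entropy rate is a*log2(e) bits per second; per the
    abstract, event triggering loses its informational advantage once
    the maximum delay reaches the inverse of this rate.
    """
    assert a > 0, "plant must be unstable"
    entropy_rate = a * math.log2(math.e)  # bits per second
    return 1.0 / entropy_rate             # seconds

print(critical_delay(1.0))  # ln(2) ~ 0.693 seconds
```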
On the effect of quantization on performance at high rates
We study the effect of quantization on the performance of a scalar dynamical system in the high-rate regime. We evaluate the LQ cost for two commonly used quantizers, uniform and logarithmic, and provide a lower bound on the performance of any centroid-based quantizer based on entropy arguments. We also consider the case when the channel drops data packets stochastically.
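The two quantizer families compared in this abstract behave quite differently: a uniform quantizer has a fixed absolute error, while a logarithmic quantizer has a bounded relative error. The sketch below is a generic illustration of the two maps (not the paper's constructions; the function names and parameters are assumptions).

```python
import math

def uniform_quantize(x, step):
    """Midpoint uniform quantizer with cell width `step`:
    absolute error is bounded by step/2, independent of |x|."""
    return step * math.floor(x / step) + step / 2.0

def log_quantize(x, rho):
    """Logarithmic quantizer with density parameter rho in (0, 1):
    levels are +/- rho**k, so the relative error is bounded
    independently of |x| (here by rho**-0.5 - 1, since we round
    to the nearest level in the log domain)."""
    if x == 0.0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    k = round(math.log(abs(x)) / math.log(rho))
    return sign * rho ** k

# Uniform: absolute error bounded by step/2 = 0.125.
print(abs(uniform_quantize(3.14, 0.25) - 3.14) <= 0.125)  # True
# Logarithmic: small relative error even for large |x|.
x = 100.0
print(abs(log_quantize(x, 0.9) - x) / x)
```

The bounded relative error is what makes logarithmic quantization natural for stabilization problems, where the state may range over many orders of magnitude.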