Limitations for nonlinear stabilization over uncertain channels
We study the problem of mean-square exponential incremental stabilization of
nonlinear systems over uncertain communication channels. We show that the
ability to stabilize a system over such channels is fundamentally limited: the
uncertain channel must provide a minimal Quality of Service (QoS) to support
stabilization. The smallest QoS necessary for stabilization is characterized as
a function of the positive Lyapunov exponents of the uncontrolled nonlinear systems.
The positive Lyapunov exponent is a measure of dynamical complexity and
captures the rate of exponential divergence of nearby system trajectories. One
of the main highlights of our results is the role played by nonequilibrium
dynamics in determining the limitation for incremental stabilization over
networks with uncertainty.
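The positive Lyapunov exponent invoked above is straightforward to estimate numerically for a one-dimensional map. As an illustration (the logistic map is our choice of example, not one from the paper), the exponent of x → 4x(1−x) can be estimated by averaging log|f'(x)| along a trajectory; for this map the exact value is ln 2:

```python
import math

def lyapunov_logistic(x0=0.2, burn_in=1000, n=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> 4x(1-x)
    by averaging log|f'(x)| = log|4 - 8x| along a trajectory."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()
print(lam)  # close to ln 2 ~ 0.693: nearby trajectories diverge like e^(lam*t)
```

A positive value of `lam` quantifies the exponential divergence rate of nearby trajectories, which is exactly the quantity that sets the QoS threshold in the result above.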
Data Rate Limitations for Stabilization of Uncertain Systems over Lossy Channels
This paper considers data rate limitations for mean square stabilization of
uncertain discrete-time linear systems via finite data rate and lossy channels.
For a plant having parametric uncertainties, a necessary condition and a
sufficient condition are derived, represented by the data rate, the packet loss
probability, uncertainty bounds on plant parameters, and the unstable
eigenvalues of the plant. The results extend those existing in the area of
networked control; in particular, the condition is exact in the scalar plant
case.
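For reference, in the nominal (uncertainty-free, lossless) case the classical data-rate theorem gives the minimum rate as the sum of log₂|λ| over the unstable open-loop eigenvalues; the conditions in this line of work add terms for packet loss and parameter uncertainty. A minimal sketch of the nominal bound (the example eigenvalues are ours, not the paper's):

```python
import math

def min_data_rate_bits(eigenvalues):
    """Classical data-rate theorem (nominal plant, no loss or uncertainty):
    mean-square stabilization requires a rate exceeding
    sum(log2 |lambda|) over the unstable eigenvalues of the plant."""
    return sum(math.log2(abs(l)) for l in eigenvalues if abs(l) > 1)

# one unstable mode at 2 and one stable mode at 0.5 -> 1 bit per sample
print(min_data_rate_bits([2.0, 0.5]))  # 1.0
```

The necessary and sufficient conditions of the paper reduce to this bound when the uncertainty bounds shrink to zero and the loss probability vanishes.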
Random-Time, State-Dependent Stochastic Drift for Markov Chains and Application to Stochastic Stabilization Over Erasure Channels
It is known that state-dependent, multi-step Lyapunov bounds lead to greatly
simplified verification theorems for stability for large classes of Markov
chain models. This is one component of the "fluid model" approach to stability
of stochastic networks. In this paper we extend the general theory to
randomized multi-step Lyapunov theory to obtain criteria for stability and
steady-state performance bounds, such as finite moments.
These results are applied to a remote stabilization problem, in which a
controller receives measurements from an erasure channel with limited capacity.
Based on the general results in the paper it is shown that stability of the
closed loop system is assured provided that the channel capacity is greater
than the logarithm of the unstable eigenvalue, plus an additional correction
term. The existence of a finite second moment in steady-state is established
under additional conditions. (To appear in IEEE Transactions on Automatic Control.)
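The flavor of such a criterion can be seen in a much simpler toy model (ours, not the paper's finite-capacity setting): a scalar plant x⁺ = a·x + w whose state is deadbeat-reset whenever the measurement packet, here assumed to be infinite-precision, survives an erasure channel with drop probability p. The second moment obeys E[x²]⁺ = p·a²·E[x²] + σ², so it stays bounded iff p·a² < 1, a capacity-like condition pitting the channel against the unstable eigenvalue:

```python
import random
random.seed(0)

a, p, sigma = 1.2, 0.2, 1.0   # unstable pole, erasure probability, noise std
x, acc, n = 0.0, 0.0, 20000
for _ in range(n):
    if random.random() > p:   # packet delivered: deadbeat control u = -a*x
        x = random.gauss(0.0, sigma)
    else:                     # packet erased: plant runs open loop
        x = a * x + random.gauss(0.0, sigma)
    acc += x * x
m = acc / n
print(m)   # near sigma^2 / (1 - p*a^2) ~ 1.40, since p*a^2 = 0.288 < 1
```

The paper's actual condition, capacity exceeding the logarithm of the unstable eigenvalue plus a correction term, plays the role that p·a² < 1 plays in this caricature.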
Optimal Causal Rate-Constrained Sampling of the Wiener Process
We consider the following communication scenario. An encoder causally observes the Wiener process and decides when and what to transmit about it. A decoder forms a real-time estimate of the process from causally received codewords. We determine the causal encoding and decoding policies that jointly minimize the mean-square estimation error, under the long-term communication rate constraint of R bits per second. We show that an optimal encoding policy can be implemented as a causal sampling policy followed by a causal compressing policy. We prove that the optimal encoding policy samples the Wiener process once the innovation passes either √(1/R) or −√(1/R), and compresses the sign of the innovation (SOI) using a 1-bit codeword. The SOI coding scheme achieves the operational distortion-rate function, which is equal to D^(op)(R)=1/(6R). Surprisingly, this is significantly better than the distortion-rate tradeoff achieved in the limit of infinite delay by the best non-causal code. This is because the SOI coding scheme leverages the free timing information supplied by the zero-delay channel between the encoder and the decoder. The key to unlocking that gain is the event-triggered nature of the SOI sampling policy. In contrast, the distortion-rate tradeoffs achieved with deterministic sampling policies are much worse: we prove that the causal informational distortion-rate function in that scenario is as high as D_(DET)(R)=5/(6R). It is achieved by the uniform sampling policy with the sampling interval 1/R. In either case, the optimal strategy is to sample the process as fast as possible and to transmit 1-bit codewords to the decoder without delay.
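The claimed distortion D^(op)(R) = 1/(6R) is easy to check by simulation. A minimal sketch of the SOI policy on a discretized Wiener process (the step size and seed are our choices; the small threshold overshoot introduced by discretization biases the estimate slightly high):

```python
import math, random
random.seed(0)

R = 1.0                      # rate budget: R bits per second
a = math.sqrt(1.0 / R)       # sampling thresholds +/- sqrt(1/R)
dt = 1e-3
n = 1_000_000                # simulate n*dt = 1000 seconds
step = math.sqrt(dt)

w, est, acc = 0.0, 0.0, 0.0
for _ in range(n):
    w += random.gauss(0.0, step)     # Wiener increment
    if abs(w - est) >= a:            # innovation hits a threshold:
        est += a if w > est else -a  # transmit 1 bit (sign of innovation);
                                     # the sampling time itself is free info
    acc += (w - est) ** 2
mse = acc / n
print(mse)   # theory: D(R) = 1/(6R), i.e. about 0.167 at R = 1
```

Since the expected inter-sample time at threshold ±√(1/R) is 1/R seconds and each sample costs one bit, the long-run rate is exactly R bits per second, as the abstract requires.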
Minimum data rate for stabilization of linear systems with parametric uncertainties
We study the stabilization of linear systems with parametric uncertainties
via feedback control over data-rate-constrained channels. The
objective is to find the limitation on the amount of information that must be
conveyed through the channels for achieving stabilization and in particular how
the plant uncertainties affect it. We derive a necessary condition and a
sufficient condition for stabilizing the closed-loop system. These conditions
provide limitations in the form of bounds on data rate and magnitude of
uncertainty on plant parameters. The bounds are characterized by the product of
the poles of the nominal plant and are less conservative than those known in
the literature. In the course of deriving these results, a new class of
nonuniform quantizers is found to be effective in reducing the required data
rate. For scalar plants, these quantizers are shown to minimize the required
data rate, and the obtained conditions become tight.
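For the nominal scalar case, the mechanism behind such bounds can be sketched with the textbook uniform "zooming" quantizer (this is the standard construction, not the paper's nonuniform quantizers for uncertain plants): with R bits per sample the controller's uncertainty about the state shrinks by a factor 2^R per step while the dynamics expand it by |a|, so the state is driven to zero whenever R > log₂|a|:

```python
a = 2.0           # unstable pole; need R_bits > log2(a) = 1
R_bits = 2        # bits per sample
N = 2 ** R_bits   # quantizer cells
x, L = 0.9, 1.0   # state and certified bound |x| <= L
for _ in range(40):
    cell = min(int((x + L) / (2 * L / N)), N - 1)  # uniform cell index
    q = -L + (cell + 0.5) * (2 * L / N)            # cell midpoint
    x = a * x - a * q      # apply u = -a*q, leaving x = a*(x - q)
    L = a * L / N          # zoom in: |x - q| <= L/N, so |x| <= a*L/N
print(abs(x))  # contracts by a/N = 1/2 per step, so ~1e-13 after 40 steps
```

With R_bits = 1 the bound L would stay constant (a/N = 1) and the scheme would sit exactly at the data-rate limit; the paper's nonuniform quantizers sharpen this accounting when the pole a itself is uncertain.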
Tracking an Auto-Regressive Process with Limited Communication per Unit Time
Samples from a high-dimensional AR[1] process are observed by a sender that
can communicate only finitely many bits per unit time to a receiver. The
receiver seeks to form an estimate of the process value at every time instant
in real-time. We consider a time-slotted communication model in a slow-sampling
regime where multiple communication slots occur between two sampling instants.
We propose a successive update scheme which uses communication between sampling
instants to refine estimates of the latest sample and study the following
question: Is it better to collect communication of multiple slots to send
better refined estimates, making the receiver wait more for every refinement,
or to be fast but loose and send new information in every communication
opportunity? We show that the fast but loose successive update scheme with
ideal spherical codes is universally optimal in the asymptotic regime of large
dimension. However, most practical quantization codes for fixed dimensions do
not meet the ideal performance required for this optimality, and they typically
will have a bias in the form of a fixed additive error. Interestingly, our
analysis shows that the fast but loose scheme is not an optimal choice in the
presence of such errors, and a judiciously chosen frequency of updates
outperforms it.
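The tradeoff can be made concrete in a stripped-down toy model (entirely ours: a memoryless Uniform(−1, 1) sample refreshed every s slots, described with ideal bisection bits and no additive quantizer bias). With ideal bits, applying each bit as soon as it arrives, the fast but loose policy, beats batching bits into less frequent, finer updates, matching the ideal-code regime described above; per the abstract, a fixed additive error per update is what reverses this conclusion:

```python
import random
random.seed(1)

s, trials = 4, 20000               # slots per sample, Monte Carlo runs
err_fast = err_batch = 0.0
for _ in range(trials):
    x = random.uniform(-1.0, 1.0)  # fresh sample to describe this interval
    lo, hi = -1.0, 1.0             # bisection interval shared with fast decoder
    blo, bhi = -1.0, 1.0           # batched decoder's (stale) interval
    for j in range(1, s + 1):
        mid = (lo + hi) / 2        # one bit per slot: which half holds x
        if x >= mid:
            lo = mid
        else:
            hi = mid
        err_fast += ((lo + hi) / 2 - x) ** 2    # fast: apply bit immediately
        if j % 2 == 0:                          # batched: update every 2 slots
            blo, bhi = lo, hi
        err_batch += ((blo + bhi) / 2 - x) ** 2
err_fast /= trials * s
err_batch /= trials * s
print(err_fast, err_batch)   # fast is strictly better with ideal bits
```

At even slots both decoders hold the same interval, but at odd slots the fast decoder is one bit ahead, which is the entire (ideal-code) advantage; a per-update bias term would penalize the fast scheme's more frequent reconstructions, as the paper's analysis shows.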