Optimal Steering of a Linear Stochastic System to a Final Probability Distribution, Part II
We address the problem of steering the state of a linear stochastic system to a prescribed distribution over a finite horizon with minimum energy, and the problem of maintaining the state at a stationary distribution over an infinite horizon with minimum power. For both problems the control and Gaussian noise channels are allowed to be distinct, thereby placing the results of this paper outside the scope of previous work in both probability and control.
We present sufficient conditions for optimality in terms of a system of dynamically coupled Riccati equations in the finite horizon case and in terms of algebraic conditions for the stationary case.
We then address the question of feasibility for both problems. For the finite-horizon case, provided the system is controllable, we prove that, without any restriction on the directionality of the stochastic disturbance, it is always possible to steer the state to an arbitrary Gaussian distribution over any specified finite time-interval.
For the stationary infinite horizon case, it is not always possible to maintain the state at an arbitrary Gaussian distribution through constant state-feedback. It is shown that covariances of admissible stationary Gaussian distributions are characterized by a certain Lyapunov-like equation and, in fact, they coincide with the class of stationary state covariances that can be attained by a suitable stationary colored noise as input.
We finally address the question of how to compute suitable controls numerically.
We present an alternative to solving the system of coupled Riccati equations by expressing the optimal controls as solutions to (convex) semi-definite programs in both cases.
We conclude with an example in which the state covariance of a distribution of inertial particles is steered over a finite interval to an admissible stationary Gaussian distribution, which is then maintained by constant-gain state-feedback control.
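The Lyapunov-like characterization of admissible stationary covariances can be checked numerically. A minimal sketch (the matrices A, B, B1, and gain K below are illustrative, not taken from the paper): under constant state feedback u = -Kx, a stationary covariance must satisfy the closed-loop Lyapunov equation, which we solve and then verify.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative data (not from the paper): a double integrator with a
# stabilizing constant gain K and a noise channel B1 distinct from B.
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.array([[0.0], [1.0]])
B1 = np.array([[1.0], [0.5]])      # noise channel differs from control channel
K  = np.array([[1.0, 2.0]])        # u = -K x; closed loop has eigenvalues -1, -1

Acl = A - B @ K
# A stationary covariance Sigma satisfies  Acl Sigma + Sigma Acl' + B1 B1' = 0.
Sigma = solve_continuous_lyapunov(Acl, -B1 @ B1.T)

residual = Acl @ Sigma + Sigma @ Acl.T + B1 @ B1.T
print(np.allclose(residual, 0))                  # Lyapunov equation holds
print(np.all(np.linalg.eigvalsh(Sigma) > 0))     # Sigma is positive definite
```

Here the gain is given and the covariance is recovered; the paper addresses the converse direction, characterizing which covariances admit such a gain.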
Steering the distribution of agents in mean-field and cooperative games
The purpose of this work is to pose and solve the problem of guiding a
collection of weakly interacting dynamical systems (agents, particles, etc.) to
a specified terminal distribution. The framework is that of mean-field and of
cooperative games. A terminal cost is used to accomplish the task; we establish
that the map between terminal costs and terminal probability distributions is
onto. Our approach relies on and extends the theory of optimal mass transport
and its generalizations.
Optimal transport over a linear dynamical system
We consider the problem of steering an initial probability density for the state vector of a linear system
to a final one, in finite time, using minimum energy control. In the case where the dynamics correspond to an integrator (ẋ = u), this amounts to a Monge-Kantorovich Optimal Mass Transport (OMT) problem. In general, we show that the problem can again be reduced to solving an OMT problem and that it has a unique solution. In parallel, we study the optimal steering of the state-density of a linear stochastic system with white noise disturbance; this is known to correspond to a Schroedinger bridge. As the white noise intensity tends to zero, the flow of densities converges to that of the deterministic dynamics and can serve as a way to compute the solution of its deterministic counterpart. The solution can be expressed in closed form for Gaussian initial and final state densities in both cases.
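For the integrator case the steering problem reduces to static Gaussian OMT, whose optimal map and squared Wasserstein-2 cost are known in closed form. A minimal sketch (means and covariances below are illustrative, and the function name is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_omt(m0, S0, m1, S1):
    """Closed-form OMT between N(m0, S0) and N(m1, S1).

    Returns the linear part M of the optimal map T(x) = m1 + M (x - m0)
    and the squared W2 cost ||m0 - m1||^2 + tr(S0 + S1 - 2 (S0^1/2 S1 S0^1/2)^1/2).
    """
    r0 = np.real(sqrtm(S0))                 # S0^(1/2)
    mid = np.real(sqrtm(r0 @ S1 @ r0))      # (S0^1/2 S1 S0^1/2)^(1/2)
    ir0 = np.linalg.inv(r0)
    M = ir0 @ mid @ ir0                     # S0^(-1/2) mid S0^(-1/2)
    cost = np.sum((m0 - m1) ** 2) + np.trace(S0 + S1 - 2.0 * mid)
    return M, cost

# Example: identity covariance to 4*I with zero means.
M, cost = gaussian_omt(np.zeros(2), np.eye(2), np.zeros(2), 4.0 * np.eye(2))
# M = 2*I (each coordinate is scaled by 2), cost = 2.0
```

For non-identity initial covariances the three matrix square roots above do the work that a simple scaling does in this example.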
Optimal control of the state statistics for a linear stochastic system
We consider a variant of the classical linear quadratic Gaussian regulator
(LQG) in which penalties on the endpoint state are replaced by the
specification of the terminal state distribution. The resulting theory
differs considerably from LQG as well as from formulations that bound the
probability of violating state constraints. We develop results for optimal
state-feedback control in the two cases where i) steering of the state
distribution is to take place over a finite window of time with minimum energy,
and ii) the goal is to maintain the state at a stationary distribution over an
infinite horizon with minimum power. For both problems the distribution of
noise and state are Gaussian. In the first case, we show that provided the
system is controllable, the state can be steered to any terminal Gaussian
distribution over any specified finite time-interval. In the second case, we
characterize explicitly the covariance of admissible stationary state
distributions that can be maintained with constant state-feedback control. The
conditions for optimality are expressed in terms of a system of dynamically
coupled Riccati equations in the finite horizon case and in terms of algebraic
conditions for the stationary case. In the case where the noise and control
share identical input channels, the Riccati equations for finite-horizon
steering become homogeneous and can be solved in closed form. The present paper
is largely based on our recent work in arxiv.org/abs/1408.2222,
arxiv.org/abs/1410.3447 and presents an overview of certain key results.
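The controllability assumption behind the finite-horizon steering result can be checked with the standard Kalman rank test. A minimal sketch (the system matrices are illustrative):

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff the controllability
    matrix [B, AB, ..., A^(n-1) B] has full row rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))             # True: any terminal Gaussian is reachable
```

When this test passes, the abstract's result guarantees the state can be steered to any terminal Gaussian distribution over any specified finite time-interval.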
Steering state statistics with output feedback
Consider a linear stochastic system whose initial state is a random vector
with a specified Gaussian distribution. Such a distribution may represent a
collection of particles abiding by the specified system dynamics. In recent
publications, we have shown that, provided the system is controllable, it is
always possible to steer the state covariance to any specified terminal
Gaussian distribution using state feedback. The purpose of the present work is
to show that, in the case where only partial state observation is available, a
necessary and sufficient condition for being able to steer the system to a
specified terminal Gaussian distribution for the state vector is that the
terminal state covariance be greater (in the positive-definite sense) than the
error covariance of a corresponding Kalman filter.
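The comparison between the terminal covariance and the filter error covariance can be illustrated at steady state. A sketch (system matrices are illustrative; the paper's condition involves the finite-horizon filter, and the steady-state algebraic Riccati equation is used here only for illustration): compute the Kalman filter error covariance and check that the candidate terminal covariance dominates it in the positive-definite sense.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative system:  dx = A x dt + dw,   dy = C x dt + dv.
A = -np.eye(2)
C = np.eye(2)
Q = np.eye(2)   # process noise intensity
R = np.eye(2)   # measurement noise intensity

# Filter Riccati equation  A P + P A' - P C' R^-1 C P + Q = 0,
# solved via the dual CARE with (A', C').
P = solve_continuous_are(A.T, C.T, Q, R)

Sigma_T = np.eye(2)   # candidate terminal state covariance
# Feasibility test from the abstract: Sigma_T - P must be positive definite.
feasible = np.all(np.linalg.eigvalsh(Sigma_T - P) > 0)
print(feasible)
```

For this scalar-decoupled example each diagonal entry of P solves p^2 + 2p - 1 = 0, giving p = sqrt(2) - 1 < 1, so the unit terminal covariance is feasible.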