Controllability and Stabilization of Kolmogorov Forward Equations for Robotic Swarms
Abstract: Numerous works have addressed the control of multi-robot systems for coverage, mapping, navigation, and task allocation problems. In addition to classical microscopic approaches to multi-robot problems, which model the actions and decisions of individual robots, lately, there has been a focus on macroscopic or Eulerian approaches. In these approaches, the population of robots is represented as a continuum that evolves according to a mean-field model, which is directly designed such that the corresponding robot control policies produce target collective behaviours.
This dissertation presents a control-theoretic analysis of three types of mean-field models proposed in the literature for modelling and control of large-scale multi-agent systems, including robotic swarms. These mean-field models are Kolmogorov forward equations of stochastic processes, and their analysis is motivated by the fact that as the number of agents tends to infinity, the empirical measure associated with the agents converges to the solution of these models. Hence, the problem of transporting a swarm of agents from one distribution to another can be posed as a control problem for the forward equation of the process that determines the time evolution of the swarm density.
First, this thesis considers the case in which the agents' states evolve on a finite state space according to a continuous-time Markov chain (CTMC), and the forward equation is an ordinary differential equation (ODE). Defining the agents' task transition rates as the control parameters, the finite-time controllability, asymptotic controllability, and stabilization of the forward equation are investigated. Second, the controllability and stabilization problem for systems of advection-diffusion-reaction partial differential equations (PDEs) is studied in the case where the control parameters include the agents' velocity as well as transition rates. Third, this thesis considers a controllability and optimal control problem for the forward equation in the more general case where the agent dynamics are given by a nonlinear discrete-time control system. Beyond these theoretical results, this thesis also considers numerical optimal transport for control-affine systems. It is shown that finite-volume approximations of the associated PDEs lead to well-posed transport problems on graphs as long as the control system is controllable everywhere.

Doctoral Dissertation, Mechanical Engineering, 201
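The finite-state setting in the abstract above admits a compact numerical sketch: the forward equation of a CTMC on a finite state space is the linear ODE dx/dt = Q^T x, where x is the swarm density over states and the off-diagonal entries of the generator Q are the agents' transition rates (the control parameters). The generator below is hypothetical, chosen only so that the density concentrates on one state; it illustrates the mechanism, not the thesis's actual construction.

```python
import numpy as np

def forward_equation_step(x, Q, dt):
    """One explicit-Euler step of the CTMC forward equation dx/dt = Q^T x."""
    return x + dt * (Q.T @ x)

# Hypothetical 3-state generator: rows sum to zero, off-diagonals nonnegative.
# Rates are chosen (arbitrarily, for illustration) so that the stationary
# density concentrates on state 2.
Q = np.array([
    [-1.0,  0.2,  0.8],
    [ 0.3, -0.5,  0.2],
    [ 0.1,  0.1, -0.2],
])

x = np.array([1.0, 0.0, 0.0])   # all agents start in state 0
dt, T = 0.01, 50.0
for _ in range(int(T / dt)):
    x = forward_equation_step(x, Q, dt)

print(x)  # approaches the stationary distribution of Q; total mass is conserved
```

Because each row of Q sums to zero, the update matrix I + dt*Q^T preserves total probability mass exactly, mirroring the conservation of the number of agents in the mean-field model.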
Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Non-stationary/Unstable Linear Systems
Stabilization of non-stationary linear systems over noisy communication channels is considered. Stochastically stable sources and unstable but noise-free or bounded-noise systems have been extensively studied in the information theory and control theory literature since the 1970s, with a renewed interest in the past decade. There have also been studies on non-causal and causal coding of unstable/non-stationary linear Gaussian sources. In this paper, tight necessary and sufficient conditions for stochastic stabilizability of unstable (non-stationary), possibly multi-dimensional, linear systems driven by Gaussian noise over discrete channels (possibly with memory and feedback) are presented. Stochastic stability notions include recurrence, asymptotic mean stationarity and sample path ergodicity, and the existence of finite second moments. Our constructive proof uses random-time state-dependent stochastic drift criteria for stabilization of Markov chains. For asymptotic mean stationarity (and thus sample path ergodicity), it is sufficient that the capacity of the channel is (strictly) greater than the sum of the logarithms of the unstable pole magnitudes, for memoryless channels and a class of channels with memory. This condition is also necessary under a mild technical condition. Sufficient conditions for the existence of finite average second moments for such systems driven by unbounded noise are provided.

Comment: To appear in IEEE Transactions on Information Theory
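The capacity condition in the abstract above can be checked numerically in a toy example. All numbers below are illustrative: a binary symmetric channel stands in for the "discrete channel", and the system matrix is an arbitrary example with one unstable pole.

```python
import numpy as np

def unstable_log_sum(A):
    """Sum of log2|lambda| over the eigenvalues of A with |lambda| > 1,
    i.e. the rate threshold set by the unstable pole magnitudes."""
    eigs = np.linalg.eigvals(A)
    return float(sum(np.log2(abs(lam)) for lam in eigs if abs(lam) > 1))

def bsc_capacity(p):
    """Capacity (bits/channel use) of a binary symmetric channel
    with crossover probability p."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h

# Illustrative system: one unstable pole at 1.2, one stable pole at 0.5.
A = np.array([[1.2, 1.0],
              [0.0, 0.5]])
threshold = unstable_log_sum(A)   # log2(1.2) ~ 0.263 bits
C = bsc_capacity(0.05)            # ~ 0.714 bits/use

print(C > threshold)  # True: capacity strictly exceeds the pole log-sum
```

When the inequality holds strictly, the sufficiency result in the paper applies (for memoryless channels and the stated class of channels with memory); the same quantity is the necessary threshold under the paper's mild technical condition.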
Linear feedback stabilization of a dispersively monitored qubit
The state of a continuously monitored qubit evolves stochastically, exhibiting competition between coherent Hamiltonian dynamics and diffusive partial-collapse dynamics that follow the measurement record. We couple these distinct types of dynamics together by linearly feeding the collected record from dispersive energy measurements directly back into a coherent Rabi drive amplitude. Such feedback turns the competition cooperative and effectively stabilizes the qubit state near a target state. We derive the conditions for obtaining such dispersive state stabilization and verify the stabilization conditions numerically. We include common experimental nonidealities, such as energy decay, environmental dephasing, imperfect detector efficiency, and feedback delay, and show that the feedback delay has the most significant negative effect on the feedback protocol. Setting the measurement collapse timescale to be long compared to the feedback delay yields the best stabilization.

Comment: 16 pages, 7 figures
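The stabilizing mechanism can be illustrated with a toy diffusive-trajectory simulation. This sketch is not the paper's protocol: it uses an ideal (efficiency-1) continuous sigma_z measurement and feeds back the filter's Bloch x-component (Omega = -g*x) rather than the raw record, with the measurement rate Gamma and gain g chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma, g = 1.0, 10.0          # measurement rate and feedback gain (illustrative)
dt, steps, ntraj = 1e-3, 3000, 100

z_final = []
for _ in range(ntraj):
    x, z = 1.0, 0.0           # start in the +x superposition state (y stays 0)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        # Ito increments for ideal continuous sigma_z monitoring
        dx = -0.5 * Gamma * x * dt - np.sqrt(Gamma) * x * z * dW
        dz = np.sqrt(Gamma) * (1.0 - z * z) * dW
        # feedback: Rabi rotation about y at angular rate Omega = -g*x,
        # which pushes the state toward the +z pole (dz gains +g*x^2*dt)
        Omega = -g * x
        dx += Omega * z * dt
        dz += -Omega * x * dt
        x, z = x + dx, z + dz
        r = np.hypot(x, z)
        if r > 1.0:           # renormalize Euler drift off the Bloch sphere
            x, z = x / r, z / r
    z_final.append(z)

print(np.mean(z_final))       # close to +1: the target state is stabilized
```

Without the feedback term, the measurement alone would collapse each trajectory to +z or -z at random; the cooperative effect described in the abstract is visible here as the ensemble average of z being pinned near the target pole.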
New advances in H∞ control and filtering for nonlinear systems
The main objective of this special issue is to summarise recent advances in H∞ control and filtering for nonlinear systems, including time-delay, hybrid and stochastic systems. The published papers provide new ideas and approaches, clearly indicating the advances made in problem statements, methodologies or applications with respect to the existing results. The special issue also includes papers focusing on advanced and non-traditional methods and presenting considerable novelties in theoretical background or experimental setup. Some papers present applications to newly emerging fields, such as network-based control and estimation.