Sensor Scheduling for Optimal Observability Using Estimation Entropy
We consider sensor scheduling as an optimal observability problem for
partially observable Markov decision processes (POMDPs). This model fits
cases in which a Markov process is observed either by a single sensor that
must be dynamically adjusted or by a set of sensors selected one at a time
so as to maximize the information acquired from the process. As in
conventional POMDP problems, the control action is based on all past
measurements; here, however, the action does not control the state process,
which is autonomous, but instead influences how that process is measured.
This POMDP is a controlled version of a hidden Markov process, and we show
that its optimal observability problem can be formulated as an average-cost
Markov decision process (MDP) scheduling problem. In this problem, a policy
is a rule for selecting sensors or adjusting the measuring device based on
the measurement history. Given a policy, we can evaluate the estimation
entropy of the joint state-measurement process, which inversely measures the
observability of the state process under that policy. Taking estimation
entropy as the cost of a policy, we show that finding an optimal policy is
equivalent to an average-cost MDP scheduling problem whose cost function is
the entropy function over the belief space. This allows the policy iteration
algorithm to be applied to find the policy achieving minimum estimation
entropy, and thus optimal observability.

Comment: 5 pages, submitted to the 2007 IEEE PerCom/PerSeNS conference
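The idea of using belief entropy as a policy's cost can be illustrated with a small simulation. The two-state chain, the two sensor models, and the two policies below are hypothetical, and policy evaluation is done by Monte Carlo averaging of the filtering-distribution entropy rather than by the paper's policy iteration over the belief space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state Markov chain (the state process is autonomous).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Two sensors; B[a][state, obs] is sensor a's observation likelihood.
B = {0: np.array([[0.95, 0.05],   # sensor 0: informative in state 0
                  [0.50, 0.50]]),
     1: np.array([[0.50, 0.50],   # sensor 1: informative in state 1
                  [0.05, 0.95]])}

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def avg_estimation_entropy(policy, T=20000):
    """Monte-Carlo estimate of the long-run average belief entropy
    under a sensor-selection policy mapping belief -> sensor index."""
    x = 0                       # hidden state
    b = np.array([0.5, 0.5])    # belief (filtering distribution)
    total = 0.0
    for _ in range(T):
        a = policy(b)
        # State transition: autonomous, unaffected by the action a.
        x = rng.choice(2, p=P[x])
        # Observation drawn through the chosen sensor.
        y = rng.choice(2, p=B[a][x])
        # Belief update: predict, then correct with the sensor likelihood.
        b = (b @ P) * B[a][:, y]
        b /= b.sum()
        total += entropy(b)
    return total / T

# A myopic policy (point the informative sensor at the likelier state)
# versus a fixed policy that always uses sensor 0.
e_myopic = avg_estimation_entropy(lambda b: 0 if b[0] >= 0.5 else 1)
e_fixed = avg_estimation_entropy(lambda b: 0)
print(e_myopic, e_fixed)
```

A lower average entropy corresponds to better observability of the state process, so the policy with the smaller value is preferred under this cost.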
System-theoretic trends in econometrics
Keywords: Economics; Estimation; Econometrics
Introduction to Online Nonstochastic Control
This text presents an introduction to an emerging paradigm in control of
dynamical systems and differentiable reinforcement learning called online
nonstochastic control. The new approach applies techniques from online convex
optimization and convex relaxations to obtain new methods with provable
guarantees for classical settings in optimal and robust control.
The primary distinction between online nonstochastic control and other
frameworks is the objective. In optimal control, robust control, and other
control methodologies that assume stochastic noise, the goal is to perform
comparably to an offline optimal strategy. In online nonstochastic control,
both the cost functions and the perturbations from the assumed dynamical
model are chosen by an adversary. Thus the optimal policy is not defined a
priori. Rather, the target is to attain low regret against the best policy in
hindsight from a benchmark class of policies.
This objective suggests the use of the decision making framework of online
convex optimization as an algorithmic methodology. The resulting methods are
based on iterative mathematical optimization algorithms, and are accompanied by
finite-time regret and computational complexity guarantees.

Comment: Draft; comments/suggestions welcome at [email protected]
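The online convex optimization machinery underlying this paradigm can be sketched with plain online gradient descent against an adversarially chosen loss sequence; the quadratic losses and step-size schedule below are illustrative, not the controllers developed in the text.

```python
import numpy as np

# Online gradient descent (OGD): at each round the learner commits to x,
# then observes a convex loss chosen by the adversary. Regret is measured
# against the best fixed decision in hindsight.
rng = np.random.default_rng(1)

d, T = 3, 2000
x = np.zeros(d)                              # learner's decision
targets = rng.uniform(-1, 1, size=(T, d))    # adversarial loss parameters

loss = lambda x, z: 0.5 * np.sum((x - z) ** 2)
grad = lambda x, z: x - z

learner_loss = 0.0
for t in range(T):
    z = targets[t]
    learner_loss += loss(x, z)                    # suffer the loss first
    x = x - (1.0 / np.sqrt(t + 1)) * grad(x, z)   # step size ~ 1/sqrt(t)

# For quadratic losses, the best fixed decision in hindsight is the mean.
x_star = targets.mean(axis=0)
best_loss = sum(loss(x_star, z) for z in targets)
regret = learner_loss - best_loss
print(regret / T)   # average regret, which vanishes as T grows
```

The point of the comparison is the one made in the abstract: no optimal policy is fixed a priori, yet the learner's cumulative loss approaches that of the best fixed comparator, with regret growing only sublinearly in T.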
International Conference on Dynamic Control and Optimization - DCO 2021: book of abstracts
No abstract available.
Stationary policies for the second moment stability in a class of stochastic systems
This paper studies uniform second moment stability for a class of stochastic control systems. The main result states that the existence of the long-run average cost under a stationary policy is equivalent to the uniform second moment stability of the corresponding stochastic control system. To illustrate the result, a numerical example is developed to verify the uniform second moment stability of a simultaneous state-feedback control system.
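As a rough illustration of the connection (not the paper's proof), consider a hypothetical scalar system with multiplicative noise under a stationary linear feedback: when the closed-loop second moment contracts, simulation shows the second moment decaying and the long-run average quadratic cost staying finite. All parameters below are invented for the sketch.

```python
import numpy as np

# Hypothetical scalar system with multiplicative noise,
#   x_{t+1} = (a + sigma * w_t) x_t + b u_t,  u_t = k x_t (stationary policy),
# so the closed loop is x_{t+1} = (a + b k + sigma * w_t) x_t.
# Second moment stability requires (a + b k)^2 + sigma^2 < 1.
rng = np.random.default_rng(2)

a, b, sigma = 1.1, 1.0, 0.3
k = -0.6                       # stationary state-feedback gain
closed = a + b * k             # closed-loop mean dynamics

assert closed**2 + sigma**2 < 1, "gain does not stabilize the second moment"

T, n_paths = 500, 2000
x = np.ones(n_paths)           # ensemble of sample paths
second_moments = []
costs = np.zeros(n_paths)
for t in range(T):
    costs += x**2                          # running quadratic cost
    w = rng.standard_normal(n_paths)
    x = (closed + sigma * w) * x           # closed-loop update
    second_moments.append(np.mean(x**2))   # empirical E[x_t^2]

avg_cost = np.mean(costs) / T              # finite long-run average cost
print(second_moments[-1], avg_cost)
```

Here E[x_{t+1}^2] = ((a + b k)^2 + sigma^2) E[x_t^2], so the chosen gain makes the second moment decay geometrically, and the average cost along the path converges rather than diverging.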
Control Theory: Mathematical Perspectives on Complex Networked Systems
Control theory is an interdisciplinary field located at the crossroads of pure and applied mathematics with systems engineering and the sciences. Its range of applicability and its techniques evolve rapidly with new developments in communication systems and electronic data processing. Thus, in recent years networked control systems emerged as a new fundamental topic, which combines complex communication structures with classical control methods and requires new mathematical methods. A substantial number of contributions to this workshop were devoted to the control of networks of systems. This was complemented by a series of lectures on other current topics such as fundamentals of nonlinear control systems, model reduction and identification, algorithmic aspects in control, as well as open problems in control.