13,459 research outputs found
Variable Splitting Methods for Constrained State Estimation in Partially Observed Markov Processes
In this paper, we propose a class of efficient, accurate, and general methods
for solving state-estimation problems with equality and inequality constraints.
The methods are based on recent developments in variable splitting and
partially observed Markov processes. We first present the generalized framework
based on variable splitting, then develop efficient methods to solve the
state-estimation subproblems arising in the framework. The solutions to these
subproblems can be made efficient by leveraging the Markovian structure of the
model as is classically done in so-called Bayesian filtering and smoothing
methods. Numerical experiments demonstrate that our methods outperform
conventional optimization methods in both computational cost and estimation
performance.
Comment: 3 figures
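The variable-splitting scheme described in this abstract can be sketched as an ADMM loop in which the x-update is an unconstrained linear-Gaussian smoothing problem (a tridiagonal system, solvable in O(T) by a Kalman/RTS pass as the abstract notes; a dense solve is used below for brevity) and the z-update is a projection onto the constraint set. The scalar random-walk model, weights, and penalty `rho` are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def constrained_smoother(y, q=1.0, r=0.5, rho=1.0, iters=100):
    """ADMM for: min_x sum_t (y_t-x_t)^2/(2r) + (x_t-x_{t-1})^2/(2q), s.t. x >= 0.

    The x-update is an unconstrained linear-Gaussian smoothing problem
    (tridiagonal system; in practice an O(T) Kalman/RTS pass, here a
    dense solve for brevity). The z-update is a cheap projection.
    """
    T = len(y)
    H = np.zeros((T, T))                      # precision matrix of x-subproblem
    for t in range(T):
        H[t, t] += 1.0 / r + rho
        if t > 0:                             # smoothness term coupling t-1, t
            H[t, t] += 1.0 / q
            H[t - 1, t - 1] += 1.0 / q
            H[t, t - 1] -= 1.0 / q
            H[t - 1, t] -= 1.0 / q
    z = np.zeros(T)
    u = np.zeros(T)                           # scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(H, y / r + rho * (z - u))  # smoothing subproblem
        z = np.maximum(0.0, x + u)                     # project onto x >= 0
        u += x - z                                     # dual update
    return z

y = np.array([-1.0, 0.5, 2.0, -0.3, 1.0])    # illustrative noisy observations
x_hat = constrained_smoother(y)              # nonnegativity-constrained estimate
```

Returning z (rather than x) guarantees the constraint holds exactly at every iteration; x and z coincide at convergence.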
The Capacity of Channels with Feedback
We introduce a general framework for treating channels with memory and
feedback. First, we generalize Massey's concept of directed information and use
it to characterize the feedback capacity of general channels. Second, we
present coding results for Markov channels. This requires determining
appropriate sufficient statistics at the encoder and decoder. Third, a dynamic
programming framework for computing the capacity of Markov channels is
presented. Fourth, it is shown that the average cost optimality equation (ACOE)
can be viewed as an implicit single-letter characterization of the capacity.
Fifth, scenarios with simple sufficient statistics are described.
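Massey's directed information, I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}), can be evaluated by brute force for small alphabets and block lengths. The sketch below uses an assumed i.i.d.-uniform input through a memoryless binary symmetric channel; in that no-feedback case directed information reduces to ordinary mutual information, n(1 - h(eps)):

```python
import itertools
import math

def directed_information(p_joint, n):
    """Massey's directed information I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}),
    computed by brute-force marginalization of a joint pmf over sequence pairs.
    p_joint maps ((x_1..x_n), (y_1..y_n)) -> probability."""

    def marg(kx, ky):                       # marginal over (x[:kx], y[:ky])
        m = {}
        for (xs, ys), p in p_joint.items():
            key = (xs[:kx], ys[:ky])
            m[key] = m.get(key, 0.0) + p
        return m

    def H(dist):                            # Shannon entropy in bits
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    total = 0.0
    for i in range(1, n + 1):
        # I(X^i; Y_i | Y^{i-1}) = H(X^i,Y^{i-1}) + H(Y^i) - H(X^i,Y^i) - H(Y^{i-1})
        total += (H(marg(i, i - 1)) + H(marg(0, i))
                  - H(marg(i, i)) - H(marg(0, i - 1)))
    return total

# Toy check: i.i.d. uniform input through a memoryless BSC(0.1), n = 2.
eps, n = 0.1, 2
p = {}
for xs in itertools.product((0, 1), repeat=n):
    for ys in itertools.product((0, 1), repeat=n):
        pr = 0.5 ** n
        for x, y in zip(xs, ys):
            pr *= (1 - eps) if x == y else eps
        p[(xs, ys)] = pr
di = directed_information(p, n)   # equals n * (1 - h(eps)) without feedback
```

With feedback the input distribution may depend on past outputs, and directed information and mutual information generally differ; the brute-force routine above handles that case unchanged, since it only needs the joint pmf.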
Controlled diffusion processes
This article gives an overview of the developments in controlled diffusion
processes, emphasizing key results regarding existence of optimal controls and
their characterization via dynamic programming for a variety of cost criteria
and structural assumptions. Stochastic maximum principle and control under
partial observations (equivalently, control of nonlinear filters) are also
discussed. Several other related topics are briefly sketched.
Comment: Published at http://dx.doi.org/10.1214/154957805100000131 in
Probability Surveys (http://www.i-journals.org/ps/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
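The dynamic-programming characterization mentioned in this abstract can be illustrated with a discounted-cost value iteration on a controlled random walk, i.e. a crude Markov-chain approximation of a controlled diffusion. The grid size, cost x^2 + u^2, noise model, and discount factor are illustrative assumptions, not from the article:

```python
import numpy as np

def value_iteration(n=21, beta=0.9, p_noise=0.25, tol=1e-8):
    """Discounted-cost value iteration for a controlled random walk on a grid,
    a crude Markov-chain approximation to a controlled diffusion.
    States x in {-(n//2), ..., n//2}; control u in {-1, 0, 1} shifts the state,
    then noise moves it +-1 w.p. p_noise each. Running cost: x^2 + u^2."""
    xs = np.arange(n) - n // 2
    V = np.zeros(n)
    while True:
        Q = np.empty((3, n))
        for k, u in enumerate((-1, 0, 1)):
            nxt = np.clip(np.arange(n) + u, 0, n - 1)       # clamp at edges
            up = np.clip(nxt + 1, 0, n - 1)
            dn = np.clip(nxt - 1, 0, n - 1)
            EV = (1 - 2 * p_noise) * V[nxt] + p_noise * (V[up] + V[dn])
            Q[k] = xs ** 2 + u ** 2 + beta * EV             # Bellman backup
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:                 # sup-norm convergence
            return xs, V_new, Q.argmin(axis=0)              # value and policy
        V = V_new

xs, V, policy = value_iteration()
# The optimal policy drives the state toward the origin: push right on the
# far left, push left on the far right, no control at the center.
```

Because the Bellman operator is a beta-contraction in the sup norm, the iteration converges geometrically from any initial V.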
Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems
Learning-based control algorithms require data collection with abundant
supervision for training. Safe exploration algorithms ensure the safety of this
data collection process even when only partial knowledge is available. We
present a new approach for optimal motion planning with safe exploration that
integrates chance-constrained stochastic optimal control with dynamics learning
and feedback control. We derive an iterative convex optimization algorithm that
solves an Information-cost Stochastic Nonlinear Optimal Control problem
(Info-SNOC). The optimization objective encodes both optimal performance and
exploration for learning, and the safety is incorporated as distributionally
robust chance constraints. The dynamics are predicted from a robust regression
model that is learned from data. The Info-SNOC algorithm is used to compute a
sub-optimal pool of safe motion plans that aid in exploration for learning
unknown residual dynamics under safety constraints. A stable feedback
controller is used to execute the motion plan and collect data for model
learning. We prove the safety of the rollouts produced by our exploration
method and the reduction of uncertainty over epochs, thereby guaranteeing the
consistency of our learning method. We validate the effectiveness of
Info-SNOC by designing
and implementing a pool of safe trajectories for a planar robot. We demonstrate
that our approach achieves a higher success rate in ensuring safety than a
deterministic trajectory optimization approach.
Comment: Submitted to RA-L 2020, review-
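The chance constraints in this abstract can be illustrated in their simplest Gaussian form: an individual linear chance constraint tightens to a deterministic inequality with a quantile back-off. This is not the Info-SNOC algorithm itself; distributionally robust variants replace the Gaussian quantile z_{1-delta} with sqrt((1-delta)/delta). The wall geometry and covariance below are hypothetical:

```python
import numpy as np
from statistics import NormalDist

def chance_constraint_lhs(a, mu, Sigma, delta):
    """Deterministic tightening of the chance constraint
    P(a @ x <= b) >= 1 - delta for Gaussian x ~ N(mu, Sigma):
    feasible iff  a @ mu + z_{1-delta} * sqrt(a @ Sigma @ a) <= b."""
    z = NormalDist().inv_cdf(1 - delta)   # Gaussian quantile back-off
    return a @ mu + z * np.sqrt(a @ Sigma @ a)

# Hypothetical example: keep a 2-D position left of the wall x_1 <= 2
# with 95% probability, under position uncertainty Sigma.
a = np.array([1.0, 0.0])
mu = np.array([1.0, 0.0])
Sigma = 0.04 * np.eye(2)
lhs = chance_constraint_lhs(a, mu, Sigma, delta=0.05)  # ~1.33 <= 2: feasible
```

Since the tightened inequality is convex in mu, constraints of this form slot directly into the kind of iterative convex program the abstract describes.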