Optimization-based Lyapunov function construction for continuous-time Markov chains with affine transition rates
We address the problem of Lyapunov function construction for a class of
continuous-time Markov chains with affine transition rates, typically
encountered in stochastic chemical kinetics. Following an optimization
approach, we take advantage of existing bounds from the Foster-Lyapunov
stability theory to obtain functions that enable us to estimate the region of
high stationary probability, as well as provide upper bounds on moments of the
chain. Our method can be used to study the stationary behavior of a given chain
without resorting to stochastic simulation, in a fast and efficient manner.
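As a minimal illustration of the Foster-Lyapunov drift condition the abstract alludes to, the sketch below uses a hypothetical birth-death chain with affine rates (constant production `k`, linear degradation `g*n`) — a toy stand-in, not the paper's model. A candidate function `V(n) = n` satisfies a drift inequality `(A V)(n) <= c - d*V(n)`, which immediately yields the stationary moment bound `E[n] <= c/d`:

```python
import numpy as np

# Hypothetical birth-death chain with affine transition rates (assumed example):
# birth rate = k (constant), death rate = g * n (linear in the state).
k, g = 10.0, 0.5

def drift(n, V):
    """Extended generator applied to V:
    (A V)(n) = k * (V(n+1) - V(n)) + g * n * (V(n-1) - V(n))."""
    return k * (V(n + 1) - V(n)) + g * n * (V(n - 1) - V(n))

# Candidate Lyapunov function V(n) = n gives (A V)(n) = k - g*n,
# i.e. the drift condition (A V)(n) <= c - d*V(n) holds with c = k, d = g.
V = lambda n: float(n)
assert all(drift(n, V) <= k - g * V(n) + 1e-9 for n in range(200))

# Foster-Lyapunov theory then bounds the stationary first moment: E[n] <= c/d.
print("stationary mean bound E[n] <=", k / g)
```

In the optimization view described above, the coefficients of `V` (here fixed by hand) would instead be decision variables chosen to make such bounds as tight as possible.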
Control Theory Meets POMDPs: A Hybrid Systems Approach
Partially observable Markov decision processes (POMDPs) provide a modeling framework for a variety of sequential decision-making scenarios under uncertainty in artificial intelligence (AI). Since the states are not directly observable in a POMDP, decisions must be made based on the output of a Bayesian filter (continuous beliefs), which makes POMDPs intractable to solve and analyze. To overcome the complexity challenge of POMDPs, we apply techniques from control theory. Our contributions are fourfold: (i) we begin by casting the problem of analyzing a POMDP as analyzing the behavior of a discrete-time switched system. Then, (ii) in order to estimate the reachable belief space of a POMDP, i.e., the set of all possible belief evolutions given an initial belief distribution over the states and a set of actions and observations, we find over-approximations in terms of sub-level sets of Lyapunov-like functions. Furthermore, (iii) in order to verify safety and performance requirements of a given POMDP, we formulate a barrier certificate theorem
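To make the switched-system view concrete, the sketch below implements the standard Bayesian belief update for an assumed toy two-state POMDP (the transition matrix `T` and observation matrix `O` are invented for illustration, not taken from the paper). Each (action, observation) pair selects one mode of a discrete-time switched system evolving on the belief simplex:

```python
import numpy as np

# Toy POMDP (hypothetical numbers): two states, one action, two observations.
T = {0: np.array([[0.9, 0.1],    # T[a][s, s'] = P(s' | s, a)
                  [0.2, 0.8]])}
O = np.array([[0.8, 0.2],        # O[s', o] = P(o | s')
              [0.3, 0.7]])

def belief_update(b, a, o):
    """Bayesian filter: b'(s') ∝ P(o | s') * sum_s P(s' | s, a) * b(s).
    For fixed (a, o) this is one mode of the switched system on beliefs."""
    unnormalized = O[:, o] * (T[a].T @ b)
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])         # initial belief over the two states
for o in [0, 1, 0]:              # one switched-mode step per observation
    b = belief_update(b, 0, o)
print(b)                         # still a probability vector on the simplex
```

The reachable belief space mentioned in (ii) is the set of all beliefs obtainable by composing such mode maps from the initial belief; the sub-level sets of Lyapunov-like functions over-approximate exactly this set.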