Data-driven optimal control with a relaxed linear program
The linear programming (LP) approach has a long history in the theory of
approximate dynamic programming. When it comes to computation, however, the LP
approach often suffers from poor scalability. In this work, we introduce a
relaxed version of the Bellman operator for q-functions and prove that it is
still a monotone contraction mapping with a unique fixed point.
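The abstract does not spell the operators out. As a sketch consistent with these claims, in notation of our own choosing (stage cost \ell, discount \gamma \in (0,1), and transition kernel P are assumptions, not the paper's symbols), the exact Bellman operator for q-functions and a relaxation that exchanges the inner minimization with the expectation read:

```latex
% Exact Bellman operator for q-functions (discounted cost minimization):
(\mathcal{T}q)(x,u) \;=\; \ell(x,u)
  \;+\; \gamma\, \mathbb{E}_{x^{+}\sim P(\cdot \mid x,u)}\Big[\min_{u^{+}} q\big(x^{+},u^{+}\big)\Big]

% Relaxed operator: pull the minimization outside the expectation.
(\hat{\mathcal{T}}q)(x,u) \;=\; \ell(x,u)
  \;+\; \gamma \min_{u^{+}} \mathbb{E}_{x^{+}\sim P(\cdot \mid x,u)}\big[q\big(x^{+},u^{+}\big)\big]
```

Since \min \mathbb{E}[\,\cdot\,] \ge \mathbb{E}[\min(\cdot)], the relaxed operator upper-bounds the exact one, and it inherits monotonicity and \gamma-contractivity in the sup-norm; for deterministic dynamics the expectation is a point evaluation and the two operators coincide.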
In the spirit of the LP approach, we exploit the new operator to build a relaxed linear program (RLP). Compared to the standard LP formulation, our RLP has only one family of constraints and half as many decision variables, making it more scalable and computationally efficient. For deterministic systems, the RLP trivially returns the correct q-function.
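To see where the constraint and variable counts come from, compare a standard q-function LP with its relaxed counterpart. The following is a sketch in the notation above, not the paper's exact program: the standard formulation linearizes the inner minimization through an auxiliary value function v, while the relaxed Bellman inequality q \le \hat{\mathcal{T}}q is already linear in q once indexed by the next input u^{+} (the state-input measure \nu is an assumption on our part):

```latex
% Standard LP: two constraint families, decision variables (q, v).
\max_{q,\,v}\ \int q\,\mathrm{d}\nu
  \quad \text{s.t.}\quad
  q(x,u) \le \ell(x,u) + \gamma\,\mathbb{E}\big[v(x^{+})\big]\ \ \forall(x,u),
  \qquad
  v(x) \le q(x,u)\ \ \forall(x,u)

% Relaxed LP (RLP): one constraint family, decision variable q alone.
\max_{q}\ \int q\,\mathrm{d}\nu
  \quad \text{s.t.}\quad
  q(x,u) \le \ell(x,u) + \gamma\,\mathbb{E}\big[q(x^{+},u^{+})\big]\ \ \forall(x,u,u^{+})
```

Dropping v is what halves the decision variables, and collapsing the two constraint families into one is what makes the sampled program cheaper to assemble and solve.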
For stochastic linear systems in continuous spaces, the solution to the RLP preserves the minimizer of the optimal q-function and hence recovers the optimal policy. These theoretical results are backed up in simulation, where we solve sampled versions of the LPs with data collected by interacting with the environment.
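As a concrete illustration of that pipeline, the sketch below assembles and solves a sampled RLP of the above form on a toy finite MDP. Everything here (problem sizes, the random MDP, the sample count, the SciPy solver) is our own illustrative setup rather than the paper's benchmark, which works in continuous spaces:

```python
# Minimal sampled-RLP sketch on a toy finite MDP (illustrative setup only).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nS, nA, gamma, N = 5, 3, 0.9, 200        # states, inputs, discount, samples per (s, a)

P = rng.random((nS, nA, nS))             # random transition kernel
P /= P.sum(axis=2, keepdims=True)
cost = rng.random((nS, nA))              # stage cost l(s, a)
idx = lambda s, a: s * nA + a            # flatten (s, a) into one q index

# "Interact with the environment": draw N next states for every (s, a).
data = {(s, a): rng.choice(nS, size=N, p=P[s, a])
        for s in range(nS) for a in range(nA)}

# RLP: max sum(q)  s.t.  q(s,a) <= l(s,a) + gamma * Ehat[q(s+, a+)]
# for every (s, a, a+): one constraint family, q the only decision variable.
A_ub, b_ub = [], []
for s in range(nS):
    for a in range(nA):
        for a_nxt in range(nA):
            row = np.zeros(nS * nA)
            row[idx(s, a)] += 1.0
            for s_nxt in data[(s, a)]:
                row[idx(s_nxt, a_nxt)] -= gamma / N   # empirical expectation
            A_ub.append(row)
            b_ub.append(cost[s, a])

res = linprog(c=-np.ones(nS * nA), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(None, None)] * (nS * nA))
q_rlp = res.x.reshape(nS, nA)

# Reference: exact q-function by q-iteration with the exact Bellman operator.
q = np.zeros((nS, nA))
for _ in range(2000):
    q = cost + gamma * np.einsum("san,n->sa", P, q.min(axis=1))

# The abstract claims minimizer preservation for deterministic and stochastic
# linear systems; for an arbitrary random MDP agreement is not guaranteed.
print("greedy policy from RLP:", q_rlp.argmin(axis=1))
print("greedy policy from q* :", q.argmin(axis=1))
```

Maximizing the sum of q over the feasible set {q : q \le \hat{\mathcal{T}}q} returns the fixed point of the relaxed operator, which is why any positive objective weights suffice in this sketch.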
For general nonlinear systems, we observe that the RLP again tends to preserve the minimizers of the solution to the LP, though the relative performance is influenced by the specific geometry of the problem.