Infinite-horizon optimal control of constrained piecewise affine (PWA)
systems has been addressed approximately by hybrid model predictive control
(MPC), which, however, suffers from computational limitations in both offline design
and online implementation. In this paper, we consider an alternative approach
based on approximate dynamic programming (ADP), an important class of methods
in reinforcement learning. We incorporate non-convex union-of-polyhedra state
constraints and linear input constraints into ADP by designing PWA penalty
functions. We adopt PWA function approximation, which admits a mixed-integer
encoding for implementing ADP.
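As a rough illustration of such a mixed-integer encoding (a minimal sketch under our own assumptions, not the paper's formulation: the one-dimensional region data, coefficients, and big-M constant below are all hypothetical), a PWA function can be minimized via a MILP by introducing binary region indicators and big-M constraints:

    import pulp

    # Hypothetical 1-D PWA function: f(x) = a_i*x + b_i when x is in [lo_i, hi_i].
    regions = [(-2.0, 0.0, -1.0, 0.5),   # (lo, hi, a, b) for mode 1
               (0.0, 2.0, 1.0, 0.5)]     # mode 2
    M = 100.0                            # big-M constant, assumed large enough

    prob = pulp.LpProblem("pwa_min", pulp.LpMinimize)
    x = pulp.LpVariable("x", -2.0, 2.0)
    t = pulp.LpVariable("t", -M, M)      # epigraph variable modeling f(x)
    d = [pulp.LpVariable(f"d{i}", cat="Binary") for i in range(len(regions))]

    prob += t                            # objective: minimize f(x) over x
    prob += pulp.lpSum(d) == 1           # exactly one region is active
    for i, (lo, hi, a, b) in enumerate(regions):
        prob += x >= lo - M * (1 - d[i])           # x lies in region i when d[i] = 1
        prob += x <= hi + M * (1 - d[i])
        prob += t >= a * x + b - M * (1 - d[i])    # t equals a_i*x + b_i when d[i] = 1
        prob += t <= a * x + b + M * (1 - d[i])

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.value(x), pulp.value(t))  # expect x = 0, f(x) = 0.5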
The main advantage of the proposed ADP method is its
online computational efficiency. In particular, we propose two control policies,
which require solving either a mixed-integer linear program of smaller scale than
that of conventional hybrid MPC or a single convex quadratic program, depending on
whether the policy is implicitly determined online or explicitly computed
offline.
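For the explicitly computed policy, the online computation is thus one convex QP per time step. The following is only a schematic sketch under our own simplifying assumptions (we render the online QP as a projection of a hypothetical offline policy output u_ref onto linear input constraints {u : H u <= h}; all numerical data are illustrative):

    import cvxpy as cp
    import numpy as np

    u_ref = np.array([0.3])        # hypothetical offline policy output at the current state
    H = np.array([[1.0], [-1.0]])  # input constraints {u : H u <= h}, here -1 <= u <= 1
    h = np.array([1.0, 1.0])

    u = cp.Variable(1)
    qp = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)),  # stay close to the policy output
                    [H @ u <= h])                            # while satisfying input constraints
    qp.solve()                     # a single small convex QP
    print(u.value)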
We characterize the stability and safety properties of the closed-loop
systems, as well as the sub-optimality of the proposed policies, by quantifying
the approximation errors of value functions and policies.
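For intuition only, such bounds typically take the following generic ADP shape (stated here for a discounted setting with factor \( \gamma \in (0,1) \), an assumption made purely for illustration; the paper's bounds and constants differ):
\[
\|\hat{V} - V^\star\|_\infty \le \varepsilon \quad \Longrightarrow \quad V^{\hat{\pi}}(x) - V^\star(x) \le \frac{2\gamma\varepsilon}{1-\gamma} \quad \forall x,
\]
where \( \hat{\pi} \) denotes the policy that is greedy with respect to the approximate value function \( \hat{V} \).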
We also develop an offline mixed-integer linear programming-based method to certify the
reliability of the proposed method.
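As a sketch of the idea behind such a certificate (our own generic rendering, with \( \ell \) a stage cost, \( f \) the PWA dynamics, \( \hat{\pi} \) the proposed policy, and \( \mathcal{X} \) the relevant state set; the paper's exact formulation may differ), one can verify a worst-case decrease condition by solving
\[
\max_{x \in \mathcal{X}} \; \ell\big(x, \hat{\pi}(x)\big) + \hat{V}\big(f(x, \hat{\pi}(x))\big) - \hat{V}(x) \;\le\; 0,
\]
which amounts to a single MILP because all functions involved are PWA.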
Simulation results on an inverted pendulum
with elastic walls and on an adaptive cruise control problem validate the
control performance in terms of constraint satisfaction and CPU time.