We propose a framework for the stability verification of Mixed-Integer Linear
Programming (MILP) representable control policies. This framework compares a
fixed candidate policy, which admits an efficient parameterization and can be
evaluated at a low computational cost, against a fixed baseline policy, which
is known to be stable but expensive to evaluate. We provide sufficient
conditions for the closed-loop stability of the candidate policy in terms of
the worst-case approximation error with respect to the baseline policy, and we
show that these conditions can be checked by solving a Mixed-Integer Quadratic
Program (MIQP). Additionally, we demonstrate that outer and inner
approximations of the stability region of the candidate policy can be computed
by solving MILPs. The proposed framework is sufficiently general to
accommodate a broad range of candidate policies, including ReLU Neural Networks
(NNs), optimal solution maps of parametric quadratic programs, and Model
Predictive Control (MPC) policies. We also present an open-source Python
toolbox based on the proposed framework, which allows for easy verification of
custom NN architectures and MPC formulations. We showcase the flexibility and
reliability of our framework in the context of a DC-DC power converter case
study and investigate its computational complexity.
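As a schematic illustration of the type of sufficient condition involved (the notation below is assumed for exposition and is not taken from this abstract): let $x^+ = f(x,u)$ denote the system dynamics, let $V$ be a Lyapunov function certifying stability of the baseline policy $\pi_b$ with decrease margin $\alpha(x) > 0$ on a region $\mathcal{X}$, and suppose $u \mapsto V(f(x,u))$ is $L$-Lipschitz uniformly over $\mathcal{X}$. Writing $\varepsilon = \max_{x \in \mathcal{X}} \|\pi_c(x) - \pi_b(x)\|$ for the worst-case approximation error of the candidate policy $\pi_c$, a standard perturbation argument gives
\[
V\bigl(f(x,\pi_c(x))\bigr) \;\le\; V\bigl(f(x,\pi_b(x))\bigr) + L\,\|\pi_c(x)-\pi_b(x)\| \;\le\; V(x) - \alpha(x) + L\,\varepsilon,
\]
so the candidate inherits the Lyapunov decrease wherever $L\,\varepsilon < \alpha(x)$. When the dynamics and both policies are MILP-representable, the maximization defining $\varepsilon$ is itself a mixed-integer program (with a quadratic objective when the error is measured in the Euclidean norm), which is what renders the verification computationally tractable.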