Ensuring the safety of nonlinear systems under model uncertainty and external
disturbances is crucial, especially for real-world control tasks. Predictive
methods such as robust model predictive control (RMPC) require solving
nonconvex optimization problems online, which leads to high computational
burden and poor scalability. Reinforcement learning (RL) scales well to complex
systems, but at the price of losing rigorous safety guarantees. This
paper presents a theoretical framework that combines the advantages of both RMPC
and RL to synthesize safety filters for nonlinear systems with state- and
action-dependent uncertainty. We decompose the robust invariant set (RIS) into
two parts: a target set that aligns with the terminal region design of RMPC, and
a reach-avoid set that accounts for the rest of the RIS. We propose a policy
iteration approach for robust reach-avoid problems and establish its monotone
convergence. This method sets the stage for an adversarial actor-critic deep RL
algorithm, which simultaneously synthesizes a reach-avoid policy network, a
disturbance policy network, and a reach-avoid value network. The learned
reach-avoid policy network is utilized to generate nominal trajectories for
online verification, which filters out potentially unsafe actions that may drive
the system into unsafe regions when worst-case disturbances are applied. We
formulate a second-order cone programming (SOCP) approach for online
verification using system level synthesis, which optimizes the worst-case
reach-avoid value over all possible trajectories. The proposed safety filter
requires much lower computational complexity than RMPC while still enjoying a
persistent robust safety guarantee. The effectiveness of our method is
illustrated through a numerical example.
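
For intuition, the robust reach-avoid value underlying the proposed policy
iteration can be sketched as the fixed point of a Bellman-type backup; the
formulation below is an illustrative assumption (the symbols $\ell$, $h$, $f$,
$\mathcal{U}$, and $\mathcal{D}$ are placeholders, not the paper's exact
notation):
\begin{equation*}
V(x) \;=\; \min\Big\{\, h(x),\; \max\big\{\, \ell(x),\; \max_{u \in \mathcal{U}} \min_{d \in \mathcal{D}} V\big(f(x,u,d)\big) \big\} \Big\},
\end{equation*}
where $\ell(x) > 0$ indicates membership in the target set, $h(x) > 0$ indicates
constraint satisfaction, $f$ denotes the uncertain dynamics, and the inner
max-min captures the adversarial interplay between the reach-avoid policy and
the disturbance policy trained by the actor-critic algorithm.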