One of the key assumptions in the stability and convergence analysis of
variational regularization is the ability to find global minimizers.
However, such an assumption is often unrealistic when the regularizer is a
black box or non-convex, which makes the search for global minimizers of the
associated Tikhonov functional a challenging task. This is in particular the case for the
emerging class of learned regularizers defined by neural networks. Instead,
standard minimization schemes are applied, which typically only guarantee that a
critical point is found. To address this issue, in this paper we study
stability and convergence properties of critical points of Tikhonov functionals
with a possibly non-convex regularizer. To this end, we introduce the concept
of relative sub-differentiability and study its basic properties. Based on this
concept, we develop a convergence analysis assuming relative
sub-differentiability of the regularizer. The rationale behind the proposed
concept is that critical points of the Tikhonov functional are also relative
critical points and that for the latter a convergence theory can be developed.
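For orientation, and with notation that may differ from the precise setting
considered here, a Tikhonov functional with forward operator $F$, noisy data
$y^{\delta}$, regularizer $\mathcal{R}$ and regularization parameter $\alpha > 0$
has the generic form
\[
  \mathcal{T}_{\alpha}(x) \;=\; \bigl\| F(x) - y^{\delta} \bigr\|^{2} \;+\; \alpha\, \mathcal{R}(x),
\]
and a critical point $x^{*}$ is characterized by the first-order condition
$0 \in \partial \mathcal{T}_{\alpha}(x^{*})$ for a suitable notion of
(sub)differential, whereas a global minimizer additionally satisfies
$\mathcal{T}_{\alpha}(x^{*}) \le \mathcal{T}_{\alpha}(x)$ for all admissible $x$.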
For the case where the noise level tends to zero, we derive a limiting problem
representing first-order optimality conditions of a related restricted
optimization problem. In addition, we give a comparison with classical methods
and show that the class of ReLU networks is an appropriate choice for the
regularization functional. Finally, we provide numerical simulations that
support our theoretical findings and demonstrate the need for the type of
analysis developed in this paper.
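As a minimal sketch, assuming a linear forward operator, a quadratic data
fidelity, and a small untrained ReLU network as regularizer (all hypothetical
choices made for illustration), the following snippet shows how a standard
gradient scheme applied to the resulting Tikhonov functional can in general
only be expected to return a critical point rather than a global minimizer:

```python
# Illustrative sketch only: hypothetical operator, data, network and step size.
import torch

torch.manual_seed(0)

n, m = 32, 24
A = torch.randn(m, n)                          # hypothetical linear forward operator
x_true = torch.randn(n)
y_delta = A @ x_true + 0.01 * torch.randn(m)   # noisy data y^delta

# Non-convex regularizer R(x) >= 0 given by a small (untrained) ReLU network.
relu_net = torch.nn.Sequential(
    torch.nn.Linear(n, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

def R(x):
    return relu_net(x).pow(2).squeeze()

alpha = 0.1                                    # regularization parameter
x = torch.zeros(n, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=1e-2)

for _ in range(500):
    optimizer.zero_grad()
    tikhonov = (A @ x - y_delta).pow(2).sum() + alpha * R(x)  # Tikhonov functional
    tikhonov.backward()
    optimizer.step()

# Gradient descent targets the first-order condition
#   0 = 2 A^T (A x - y_delta) + alpha * grad R(x),
# i.e. a critical point; it gives no guarantee of global optimality for the
# non-convex objective.
```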