Composite optimization problems, where the sum of a smooth and a merely lower
semicontinuous function has to be minimized, are often tackled numerically by
means of proximal gradient methods, provided that the lower semicontinuous part of
the objective function has a simple enough structure. The available convergence
theory associated with these methods (mostly) requires the derivative of the
smooth part of the objective function to be (globally) Lipschitz continuous,
which can be a restrictive assumption in some practically relevant
scenarios. In this paper, we readdress this classical topic and provide
convergence results for the classical (monotone) proximal gradient method and
one of its nonmonotone extensions which are applicable in the absence of
(strong) Lipschitz assumptions. This is possible since, at the price of
forgoing convergence rates, we omit the use of descent-type lemmas in our
analysis.

Comment: 23 pages
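To make the setting concrete, the following is a minimal sketch of a classical (monotone) proximal gradient iteration, specialized to an l1-regularized least-squares instance as an illustrative example; this particular problem, the fixed stepsize, and all function names are our own choices and are not taken from the paper, whose analysis precisely aims at settings where such stepsize rules based on global Lipschitz constants are unavailable.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the nonsmooth, merely
    # lower semicontinuous part here is the l1-norm, which is "simple":
    # its prox has a closed form).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterating
    #   x_{k+1} = prox_{step * lam ||.||_1}(x_k - step * grad f(x_k)).
    # A fixed stepsize (e.g. step <= 1 / ||A^T A||) is assumed here,
    # which is exactly the kind of global Lipschitz-based rule the
    # paper's analysis avoids.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

For instance, with `A` the 2x2 identity the iteration converges to the soft-thresholded data vector, recovering the closed-form solution of the denoising problem.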