Building upon recent works on linesearch-free adaptive proximal gradient
methods, this paper proposes AdaPG^{π,r}, a framework that unifies and
extends existing results by providing larger stepsize policies and improved
lower bounds. Different choices of the parameters π and r are discussed,
and the efficacy of the resulting methods is demonstrated through numerical
simulations. To better understand the underlying theory, convergence of the
proposed framework is established in a more general setting that allows for
time-varying parameters. Finally, an adaptive alternating minimization
algorithm is presented by exploiting the dual setting. This algorithm not
only incorporates additional adaptivity, but also extends its applicability
beyond the standard strongly convex setting.
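
As a rough illustration of the class of linesearch-free adaptive proximal gradient methods the abstract refers to, the Python sketch below pairs a proximal step with a curvature-based stepsize rule in the style of Malitsky and Mishchenko. The update shown is an illustrative assumption, not the AdaPG^{π,r} policy itself, and the names grad_f, prox_g, and all parameter values are hypothetical.

```python
import numpy as np

def adaptive_proximal_gradient(grad_f, prox_g, x0,
                               gamma0=1e-3, max_iter=500, tol=1e-8):
    """Sketch of a linesearch-free adaptive proximal gradient method.

    grad_f : gradient of the smooth term f
    prox_g : prox_g(v, gamma) = argmin_x g(x) + ||x - v||^2 / (2*gamma)

    NOTE: the stepsize rule below is a Malitsky-Mishchenko-style
    curvature estimate used for illustration only; it is NOT the
    AdaPG^{pi,r} policy proposed in the paper.
    """
    x_prev, g_prev = x0, grad_f(x0)
    gamma_prev = gamma = gamma0
    # first proximal gradient step with the initial stepsize
    x = prox_g(x_prev - gamma * g_prev, gamma)
    for _ in range(max_iter):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        # local inverse-curvature estimate ||dx|| / ||dg||
        nrm_dg = np.linalg.norm(dg)
        L_inv = np.linalg.norm(dx) / nrm_dg if nrm_dg > 0 else np.inf
        # grow the stepsize geometrically, capped by the curvature estimate
        gamma_next = min(gamma * np.sqrt(1.0 + gamma / gamma_prev),
                         0.5 * L_inv)
        x_prev, g_prev = x, g
        gamma_prev, gamma = gamma, gamma_next
        # proximal gradient step with the adaptively chosen stepsize
        x = prox_g(x_prev - gamma * g_prev, gamma)
        if np.linalg.norm(x - x_prev) <= tol:
            break
    return x
```

The key design point, shared with the methods the abstract builds on, is that no function values or backtracking linesearch are needed: the stepsize is driven entirely by gradient differences observed along the iterates.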