Finding a zero of a maximal monotone operator is fundamental in convex
optimization and monotone operator theory, and the \emph{proximal point
algorithm} (PPA) is a primary method for solving this problem. The PPA converges not only
globally under fairly mild conditions but also asymptotically at a fast linear
rate provided that the underlying inverse operator is Lipschitz continuous at
the origin. These nice convergence properties are preserved by a relaxed
variant of PPA. Recently, a linear convergence bound was established in [M.
Tao and X. M. Yuan, J. Sci. Comput., 74 (2018), pp. 826-850] for the relaxed
PPA, and it was shown that the bound is optimal when the relaxation factor
γ lies in [1,2). However, for other choices of γ, the bound
obtained by Tao and Yuan is suboptimal. In this paper, we establish tight
linear convergence bounds for any choice of γ∈(0,2) and make the whole
picture of optimal linear convergence bounds clear. These results sharpen
our understanding of the asymptotic behavior of the relaxed PPA.

Comment: 9 pages and 1 figure
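As a minimal illustration (not from the paper), the relaxed PPA iteration has the form z_{k+1} = (1 - γ) z_k + γ (I + λT)^{-1}(z_k). The sketch below applies it to the hypothetical toy operator T(x) = x, whose resolvent is simply z / (1 + λ); with this choice each step contracts the iterate by the factor |1 - γ + γ/(1 + λ)|, so the iterates converge linearly to the zero of T for any γ ∈ (0, 2).

```python
def relaxed_ppa(z0, gamma, lam, steps):
    # Relaxed proximal point iteration for the toy operator T(x) = x,
    # whose resolvent (I + lam*T)^{-1} is z -> z / (1 + lam).
    # Assumed inputs: relaxation factor gamma in (0, 2), lam > 0.
    z = z0
    for _ in range(steps):
        resolvent = z / (1.0 + lam)          # proximal (resolvent) step
        z = (1.0 - gamma) * z + gamma * resolvent  # relaxation step
    return z

# Example: gamma = 1.5, lam = 1.0 gives per-step contraction factor
# |1 - 1.5 + 1.5/2| = 0.25, so the iterate shrinks rapidly toward 0.
print(relaxed_ppa(1.0, 1.5, 1.0, 50))
```

This toy operator is chosen only because its resolvent is available in closed form; for a general maximal monotone operator the resolvent step requires solving a regularized subproblem.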