In this work, we revisit a classical incremental implementation of the
primal-descent dual-ascent gradient method for solving equality-constrained
optimization problems. We provide a short proof that establishes
the linear (exponential) convergence of the algorithm for smooth
strongly convex cost functions and study its relation to the non-incremental
implementation. We also study the effect of the augmented Lagrangian penalty
term on the performance of distributed optimization algorithms for the
minimization of aggregate cost functions over multi-agent networks.
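For concreteness, a minimal sketch of the recursion in question, in illustrative notation not fixed by the abstract: the problem is $\min_x f(x)$ subject to $Ax = b$, with step size $\mu > 0$, multiplier $\lambda_k$, and augmented-Lagrangian penalty parameter $\rho \ge 0$:
% Augmented Lagrangian: L_rho(x, lambda) = f(x) + lambda^T (Ax - b) + (rho/2) ||Ax - b||^2
\begin{align*}
  x_{k+1}       &= x_k - \mu \bigl( \nabla f(x_k) + A^\top \lambda_k + \rho A^\top (A x_k - b) \bigr), \\
  \lambda_{k+1} &= \lambda_k + \mu \, (A x_{k+1} - b).
\end{align*}
Here the incremental character is reflected in the dual step using the already-updated iterate $x_{k+1}$, whereas the non-incremental variant uses $x_k$ in both updates; setting $\rho = 0$ recovers the unpenalized primal-descent dual-ascent recursion.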