We introduce and analyze an algorithm for the minimization of convex
functions that are the sum of differentiable terms and proximable terms
composed with linear operators. The method builds upon the recently developed
smoothed gap technique. In addition to a precise convergence rate result, valid
even in the presence of linear inclusion constraints, this new method allows an
explicit treatment of the gradients of the differentiable terms and can be
enhanced with line-search. We also study the consequences of restarting the
acceleration of the algorithm at a given frequency. These new features are not
standard for primal-dual methods and allow us to solve difficult large-scale
convex optimization problems. We numerically illustrate the superior
performance of the algorithm against state-of-the-art methods on basis pursuit,
TV-regularized least squares regression, and L1 regression problems.