Privacy protection has become an increasingly pressing requirement in
distributed optimization. However, equipping distributed optimization with
differential privacy, the state-of-the-art privacy protection mechanism,
unavoidably compromises optimization accuracy. In this paper, we propose an
algorithm to achieve rigorous ϵ-differential privacy in
gradient-tracking based distributed optimization with enhanced optimization
accuracy. More specifically, to suppress the influence of differential-privacy
noise, we propose a new robust gradient-tracking based distributed optimization
algorithm that allows both the stepsize and the variance of injected noise to vary
with time. We then establish a new analysis approach that characterizes
the convergence of the gradient-tracking based algorithm under both constant
and time-varying stepsizes. To our knowledge, this is the first analysis
framework that treats gradient-tracking based distributed optimization under
both constant and time-varying stepsizes in a unified manner. More importantly,
the new analysis approach yields a much less conservative analytical bound on
the stepsize compared with existing proof techniques for gradient-tracking
based distributed optimization. We also theoretically characterize the
influence of differential-privacy design on the accuracy of distributed
optimization, which reveals that inter-agent interaction has a significant
impact on the final optimization accuracy. The discovery prompts us to optimize
inter-agent coupling weights to minimize the optimization error induced by the
differential-privacy design. Numerical simulation results confirm the
theoretical predictions.
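As a rough illustration of the algorithmic idea summarized above, the sketch below runs gradient tracking over a five-agent ring with scalar quadratic objectives, injecting Laplace noise of geometrically decaying scale into the shared primal iterates and using a diminishing stepsize. The network, objectives, weights, and schedules are all illustrative assumptions, not the paper's actual algorithm or parameter choices.

```python
import numpy as np

# Illustrative sketch only (assumed setup, not the paper's algorithm):
# gradient tracking over a 5-agent ring, each agent minimizing a local
# quadratic f_i(x) = 0.5 * (x - b_i)^2, so the global optimum is mean(b).
# Laplace noise with a geometrically decaying scale perturbs the shared
# primal iterates, and the stepsize decays with time, mirroring the idea
# of letting both the stepsize and the noise variance vary across iterations.

rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)                 # local minimizers

# Doubly stochastic coupling-weight matrix for a ring graph
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def grad(x):
    return x - b                       # local gradients of the quadratics

x = np.zeros(n)                        # primal iterates, one per agent
y = grad(x)                            # gradient-tracking variables

for k in range(3000):
    alpha = 1.0 / (k + 10)             # time-varying (diminishing) stepsize
    scale = 0.3 * 0.97 ** k            # decaying Laplace noise scale
    noisy_x = x + rng.laplace(scale=scale, size=n)  # privacy-perturbed messages
    x_new = W @ noisy_x - alpha * y
    y = W @ y + grad(x_new) - grad(x)  # track the network-average gradient
    x = x_new

print(x)   # all agents end up close to b.mean()
```

Because the noise scale decays geometrically while the stepsizes sum to infinity, the perturbations are eventually averaged out and all agents approach the global optimum, which is the flavor of accuracy guarantee the abstract describes.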