We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters, which facilitates navigating the trade-off between privacy and utility. The algorithm is applicable to any privacy measure that maps the distortion to a real value. It achieves a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round of federated learning. Such adaptive and fine-grained protection improves the effectiveness of privacy-preserving federated learning.
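To make the per-parameter, per-client, per-round adaptation concrete, the following is a minimal sketch under our own assumptions, not the paper's actual algorithm: it assumes Gaussian noise as the distortion mechanism, an illustrative leakage function, and hypothetical names (`leakage`, `utility_loss`, `adapt_sigma`). Each client adapts its own per-parameter noise scales every round by descending a weighted utility-privacy objective, treating the privacy measure as a black box.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical privacy leakage measure: maps per-parameter distortion (here,
# Gaussian noise scales) to a real value; any such black-box map would do.
def leakage(sigma: np.ndarray) -> float:
    return float(np.sum(1.0 / (1.0 + sigma)))

# Hypothetical utility loss: distorting parameters more hurts accuracy more.
def utility_loss(sigma: np.ndarray) -> float:
    return float(np.sum(sigma ** 2))

def tradeoff_objective(sigma: np.ndarray, lam: float) -> float:
    # Weighted utility-privacy trade-off to be minimized.
    return utility_loss(sigma) + lam * leakage(sigma)

def adapt_sigma(sigma: np.ndarray, lam: float, lr: float = 0.05,
                h: float = 1e-4) -> np.ndarray:
    """One gradient step on the trade-off objective, estimated by central
    finite differences so the privacy measure can be an arbitrary black box."""
    grad = np.empty_like(sigma)
    for i in range(sigma.size):
        e = np.zeros_like(sigma)
        e[i] = h
        grad[i] = (tradeoff_objective(sigma + e, lam)
                   - tradeoff_objective(sigma - e, lam)) / (2 * h)
    return np.clip(sigma - lr * grad, 1e-6, None)  # noise scales stay positive

# Simulated FL loop: each client adapts its own per-parameter noise scales
# every round, then sends distorted updates to the server.
n_clients, dim, rounds, lam = 3, 4, 50, 0.5
sigmas = [np.full(dim, 0.5) for _ in range(n_clients)]
global_model = np.zeros(dim)
for t in range(rounds):
    updates = []
    for c in range(n_clients):
        sigmas[c] = adapt_sigma(sigmas[c], lam)          # personalized scales
        local_update = rng.normal(size=dim) * 0.1        # stand-in for training
        noisy = local_update + rng.normal(size=dim) * sigmas[c]  # distortion
        updates.append(noisy)
    global_model += np.mean(updates, axis=0)             # server aggregation
print("final per-parameter noise scales (client 0):", np.round(sigmas[0], 3))
```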
Theoretically, we show that the gap between the utility loss of the protection hyperparameter output by our algorithm and that of the optimal protection hyperparameter is sublinear in the total number of iterations. This sublinearity implies that the average gap between the performance of our algorithm and the optimal performance vanishes as the number of iterations goes to infinity.
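In symbols, with $\theta_t$ the protection hyperparameter chosen at iteration $t$, $\theta^\ast$ the optimal hyperparameter, and $U(\cdot)$ the utility loss (notation introduced here for illustration, not necessarily the paper's), the claim is that a sublinearly growing cumulative gap forces the average gap to vanish:
\[
\sum_{t=1}^{T}\bigl(U(\theta_t)-U(\theta^\ast)\bigr)\in o(T)
\quad\Longrightarrow\quad
\frac{1}{T}\sum_{t=1}^{T}\bigl(U(\theta_t)-U(\theta^\ast)\bigr)\longrightarrow 0
\quad\text{as } T\to\infty .
\]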
Further, we provide the convergence rate of the proposed algorithm. Empirically, we conduct experiments on benchmark datasets to verify that our method achieves better utility than baseline methods under the same privacy budget.