Emerging applications of next-generation wireless networks, such as edge
intelligence and wireless distributed learning, face two critical challenges:
communication efficiency and privacy protection. In this work, we address both
challenges within a distributed learning framework. We propose a new approach
that achieves communication efficiency and privacy protection simultaneously by
exploiting the privacy advantage offered by quantization. Specifically, we use
a quantization scheme called \textbf{Gau}ssian \textbf{L}ayered
\textbf{R}andomized \textbf{Q}uantization (Gau-LRQ), which compresses the raw
model gradients using a layered multishift coupler. By adjusting the parameters
of Gau-LRQ, we shape the quantization error to follow a desired Gaussian
distribution, thereby ensuring client-level differential privacy (CLDP). We
demonstrate the effectiveness of our proposed Gau-LRQ in the distributed
stochastic gradient descent (SGD) framework and theoretically quantify the
trade-offs between communication, privacy, and convergence performance. We
further improve the convergence performance by enabling dynamic privacy budget
and quantization bit allocation. We achieve this by formulating an optimization
problem that minimizes the convergence error subject to the overall privacy
budget constraint. We evaluate our approach on multiple datasets, including MNIST,
CIFAR-10, and CIFAR-100, and show that our proposed method outperforms the
baselines in terms of learning performance under various privacy constraints.
Moreover, we observe that dynamic privacy budget allocation yields additional
accuracy improvements over the fixed allocation scheme.
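
As a concrete illustration of the error-shaping idea, the following minimal
sketch (an assumption-laden example, not the paper's exact Gau-LRQ construction;
the function \texttt{gaussian\_shaped\_quantize} and its parameters are
hypothetical) shows how a subtractive-dithered quantizer with a randomly drawn
step size and shared randomness can produce quantization error whose marginal
distribution is Gaussian, using the fact that a zero-mean Gaussian is a scale
mixture of uniform distributions.

\begin{verbatim}
# Hypothetical sketch: shape quantization error to be Gaussian via
# subtractive dithering with a random step size.  NOT the paper's
# Gau-LRQ construction; it only illustrates the underlying principle
# that N(0, sigma^2) is a scale mixture of uniform distributions.
import numpy as np

def gaussian_shaped_quantize(x, sigma, rng):
    """Quantize scalar x so that (decoded - x) ~ N(0, sigma^2)."""
    # Radius R = ||N(0, sigma^2 I_3)||: the mixing distribution that
    # turns Uniform(-R, R) into N(0, sigma^2) after marginalizing.
    r = np.linalg.norm(rng.normal(0.0, sigma, size=3))
    u = rng.uniform(-r, r)             # dither shared with the decoder
    step = 2.0 * r                     # quantization step size
    k = int(np.round((x + u) / step))  # integer index to transmit
    decoded = k * step - u             # decoder reconstruction
    return k, decoded

# Empirical check: the error should be close to N(0, sigma^2).
rng = np.random.default_rng(0)
sigma = 0.1
errs = []
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0)
    _, x_hat = gaussian_shaped_quantize(x, sigma, rng)
    errs.append(x_hat - x)
print(np.mean(errs), np.std(errs))     # approx. 0.0 and sigma
\end{verbatim}

The check above verifies only the error-shaping property; how the shared
randomness is handled in the CLDP analysis, and how the index is entropy-coded
for communication, follow the paper's treatment and are not reproduced here.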