UVeQFed: Universal Vector Quantization for Federated Learning
Traditional deep learning models are trained at a centralized server using
labeled data samples collected from end devices or users. Such data samples
often include private information, which the users may not be willing to share.
Federated learning (FL) is an emerging approach to train such learning models
without requiring the users to share their possibly private labeled data. In
FL, each user trains its copy of the learning model locally. The server then
collects the individual updates and aggregates them into a global model. A
major challenge that arises in this method is the need for each user to
efficiently transmit its learned model over the throughput-limited uplink
channel. In this work, we tackle this challenge using tools from quantization
theory. In particular, we identify the unique characteristics associated with
conveying trained models over rate-constrained channels, and propose a suitable
quantization scheme for such settings, referred to as universal vector
quantization for FL (UVeQFed). We show that combining universal vector
quantization methods with FL yields a decentralized training system in which
the compression of the trained models induces only minimal distortion. We
then theoretically analyze the distortion, showing that it vanishes as the
number of users grows. We also characterize the convergence of models trained
with the traditional federated averaging method combined with UVeQFed to the
model which minimizes the loss function. Our numerical results demonstrate the
gains of UVeQFed over previously proposed methods in terms of both distortion
induced by quantization and the accuracy of the resulting aggregated model.
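The role of dithering in such quantization schemes, and why the aggregate distortion shrinks as the number of users grows, can be illustrated with a toy scalar version. This is only an illustrative sketch under simplifying assumptions (scalar rounding with a hypothetical step size, a shared update across users), not the actual UVeQFed scheme, which uses lattice-based vector quantization with universal lossless coding:

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, step, rng):
    """Subtractive dithered scalar quantization: add uniform dither,
    round to the lattice of spacing `step`, then subtract the (shared)
    dither at the decoder. The error is zero-mean and independent of x."""
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    quantized = step * np.round((x + dither) / step)
    return quantized - dither  # decoder subtracts the shared dither

# Toy federated setting: many users quantize the same model update with
# independent dithers, and the server averages the quantized copies.
true_update = rng.normal(size=1000)
num_users = 100
step = 0.5  # illustrative quantization step

avg = np.mean(
    [dithered_quantize(true_update, step, rng) for _ in range(num_users)],
    axis=0,
)

# Because the per-user quantization errors are zero-mean and independent,
# averaging shrinks the distortion roughly by 1/num_users, mirroring the
# abstract's claim that the distortion vanishes as the number of users grows.
err_single = np.mean((dithered_quantize(true_update, step, rng) - true_update) ** 2)
err_avg = np.mean((avg - true_update) ** 2)
```

Here `err_avg` is far smaller than `err_single`, which is the mechanism behind the vanishing-distortion result.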
Capacity Bounds for One-Bit MIMO Gaussian Channels with Analog Combining
The use of 1-bit analog-to-digital converters (ADCs) is seen as a promising
approach to significantly reduce the power consumption and hardware cost of
multiple-input multiple-output (MIMO) receivers. However, the nonlinear
distortion due to 1-bit quantization fundamentally changes the optimal
communication strategy and also imposes a capacity penalty to the system. In
this paper, the capacity of a Gaussian MIMO channel in which the antenna
outputs are processed by an analog linear combiner and then quantized by a set
of zero threshold ADCs is studied. A new capacity upper bound for the zero
threshold case is established that is tighter than the bounds available in the
literature. In addition, we propose an achievability scheme which configures
the analog combiner to create parallel Gaussian channels with phase
quantization at the output. Under this class of analog combiners, an algorithm
is presented that identifies the analog combiner and input distribution that
maximize the achievable rate. Numerical results are provided showing that the
rate of the achievability scheme is tight in the low signal-to-noise ratio
(SNR) regime. Finally, a new 1-bit MIMO receiver architecture which employs
analog temporal and spatial processing is proposed. The proposed receiver
attains the capacity in the high SNR regime.
Comment: 30 pages, 9 figures. Submitted to IEEE Transactions on Communications.
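The effect of zero-threshold ADCs after an analog combiner can be sketched numerically. This is a toy illustration, not the paper's achievability scheme: the channel, symbols, and the pseudoinverse combiner are all hypothetical choices made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_bit_adc(y):
    """Zero-threshold 1-bit ADCs on each I/Q branch: only the signs of
    the real and imaginary parts of the combiner outputs survive."""
    return np.sign(y.real) + 1j * np.sign(y.imag)

# Toy 2x2 MIMO link (hypothetical random channel and unit-modulus symbols).
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = np.exp(1j * rng.uniform(0, 2 * np.pi, size=2))
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# One possible analog combiner (an assumption for this sketch): invert the
# channel so the combiner outputs approximate parallel per-stream channels.
V = np.linalg.pinv(H)
r = one_bit_adc(V @ (H @ x + noise))

# Each output lands on one of the four points {+-1 +- 1j}: the receiver
# retains only the quadrant of each stream's phase, i.e. the "phase
# quantization at the output" that the abstract's achievability scheme exploits.
```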