High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
Edge machine learning involves the deployment of learning algorithms at the
wireless network edge so as to leverage massive mobile data for enabling
intelligent applications. The mainstream edge learning approach, federated
learning, builds on distributed gradient descent: stochastic gradients are
computed at edge devices and then transmitted to an edge server to update a
global AI model. Since each
stochastic gradient is typically high-dimensional (with millions to billions of
coefficients), communication overhead becomes a bottleneck for edge learning.
To address this issue, we propose in this work a novel framework of
hierarchical stochastic gradient quantization and study its effect on the
learning performance. First, the framework features a practical hierarchical
architecture for decomposing the stochastic gradient into its norm and
normalized block gradients, and efficiently quantizes them using a uniform
quantizer and a low-dimensional codebook on a Grassmann manifold, respectively.
Subsequently, the quantized normalized block gradients are scaled and cascaded
to yield the quantized normalized stochastic gradient using a so-called hinge
vector designed under the criterion of minimum distortion. The hinge vector is
also efficiently compressed using another low-dimensional Grassmannian
quantizer. The other feature of the framework is a bit-allocation scheme for
reducing the quantization error. The scheme determines the resolutions of the
low-dimensional quantizers in the proposed framework. The framework is proved
to guarantee model convergence by analyzing the convergence rate as a function
of the quantization bits. Furthermore, by simulation, our design is shown to
substantially reduce the communication overhead compared with the
state-of-the-art signSGD scheme, while the two achieve similar learning
accuracy.
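To make the decomposition concrete, below is a minimal NumPy sketch of the idea, not the authors' implementation: the gradient is split into its norm and normalized block gradients, the norm is passed through a uniform scalar quantizer, and each normalized block is mapped to the nearest codeword of a codebook. The random unit-norm codebook, the block size, the codeword count, the norm bit width, the assumed dynamic range, and the simple per-block weighting (standing in for the paper's quantized hinge vector and bit-allocation scheme) are all illustrative assumptions.

```python
import numpy as np

def uniform_quantize(x, num_bits, max_val):
    """Uniformly quantize a nonnegative scalar with num_bits resolution over [0, max_val]."""
    step = max_val / (2 ** num_bits - 1)
    return np.round(x / step) * step

def make_codebook(block_dim, num_codewords, seed=0):
    """Random unit-norm codewords standing in for a low-dimensional Grassmannian codebook."""
    rng = np.random.default_rng(seed)
    c = rng.standard_normal((num_codewords, block_dim))
    return c / np.linalg.norm(c, axis=1, keepdims=True)

def quantize_gradient(grad, block_dim=16, num_codewords=64, norm_bits=8):
    """Decompose grad into norm + normalized block gradients and quantize each part."""
    norm = np.linalg.norm(grad)
    q_norm = uniform_quantize(norm, norm_bits, max_val=32.0)  # assumed dynamic range
    codebook = make_codebook(block_dim, num_codewords)
    blocks = grad.reshape(-1, block_dim)
    q_blocks = []
    for b in blocks:
        u = b / (np.linalg.norm(b) + 1e-12)       # normalized block gradient
        idx = np.argmax(np.abs(codebook @ u))     # closest codeword up to sign
        sign = np.sign(codebook[idx] @ u)
        q_blocks.append(sign * codebook[idx])
    # The paper rescales and cascades blocks via a quantized hinge vector; here we
    # simply weight each quantized block by its unquantized norm for illustration.
    weights = np.linalg.norm(blocks, axis=1)
    q_dir = np.concatenate([w * qb for w, qb in zip(weights, q_blocks)])
    q_dir /= (np.linalg.norm(q_dir) + 1e-12)
    return q_norm * q_dir

grad = np.random.default_rng(1).standard_normal(256)
print(np.linalg.norm(grad - quantize_gradient(grad)))  # quantization distortion
```

Only the codeword indices, the sign bits, and the quantized norm would need to be transmitted, which is what drives the communication savings relative to sending full-precision coefficients.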
Multiple Hypothesis Dropout: Estimating the Parameters of Multi-Modal Output Distributions
In many real-world applications, from robotics to pedestrian trajectory
prediction, there is a need to predict multiple real-valued outputs to
represent several potential scenarios. Current deep learning techniques to
address multiple-output problems are based on two main methodologies: (1)
mixture density networks, which suffer from poor stability at high dimensions,
and (2) multiple choice learning (MCL), an approach that uses single-output
functions, each only producing a point estimate hypothesis. This paper presents
a Mixture of Multiple-Output functions (MoM) approach using a novel variant of
dropout, Multiple Hypothesis Dropout. Unlike traditional MCL-based approaches,
each multiple-output function not only estimates the mean but also the variance
for its hypothesis. This is achieved through a novel stochastic winner-take-all
loss which allows each multiple-output function to estimate variance through
the spread of its subnetwork predictions. Experiments on supervised learning
problems illustrate that our approach outperforms existing solutions for
reconstructing multimodal output distributions. Additional studies on
unsupervised learning problems show that estimating the parameters of latent
posterior distributions within a discrete autoencoder significantly improves
codebook efficiency, sample quality, precision and recall.Comment: To appear in Proceedings of the 38th AAAI Conference on Artificial
Intelligence (AAAI-24). 13 pages (9 main, 4 appendix
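As a rough illustration of the winner-take-all idea behind the loss, here is a minimal NumPy sketch, not the authors' code: each of K hypotheses predicts a mean and a variance for the target, and only the hypothesis with the lowest Gaussian negative log-likelihood contributes to the loss. The specific function names, the diagonal-Gaussian likelihood, and the toy numbers are assumptions; the dropout-based subnetwork sampling that makes the paper's loss stochastic is not modeled here.

```python
import numpy as np

def gaussian_nll(y, mean, var):
    """Negative log-likelihood of y under a diagonal Gaussian with the given mean and variance."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mean) ** 2 / var, axis=-1)

def winner_take_all_loss(y, means, variances):
    """means, variances: (K, D) arrays of K hypotheses; y: (D,) target.
    Returns the loss of the best hypothesis and its index."""
    nll = np.array([gaussian_nll(y, m, v) for m, v in zip(means, variances)])
    winner = int(np.argmin(nll))  # only the closest hypothesis is trained on this target
    return nll[winner], winner

y = np.array([0.5, -1.0])
means = np.array([[0.4, -0.9], [2.0, 2.0], [-1.0, 0.0]])
variances = np.full_like(means, 0.1)
loss, k = winner_take_all_loss(y, means, variances)
print(f"winner hypothesis {k}, loss {loss:.3f}")
```

In a multi-modal setting, repeatedly training only the winning hypothesis lets different hypotheses specialize on different output modes, while the predicted variances capture the spread around each mode.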