Distributed Learning with Compressed Gradient Differences
Training large machine learning models requires a distributed computing
approach, with communication of the model updates being the bottleneck. For
this reason, several methods based on the compression (e.g., sparsification
and/or quantization) of updates were recently proposed, including QSGD
(Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et
al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods
are able to learn the gradients, which renders them incapable of converging to
the true optimum in the batch mode, incompatible with non-smooth regularizers,
and slow to converge. In this work we propose a new distributed
learning method --- DIANA --- which resolves these issues via compression of
gradient differences. We perform a theoretical analysis in the strongly convex
and nonconvex settings and show that our rates are superior to existing rates.
Our analysis of block-quantization and differences between $\ell_2$ and
$\ell_\infty$ quantization closes the gaps in theory and practice. Finally, by
applying our analysis technique to TernGrad, we establish the first convergence
rate for this method.
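To make the gradient-difference idea concrete, here is a minimal Python sketch
of one DIANA-style step, assuming a random sparsifier as the compression
operator and treating the workers as simple gradient oracles; the step sizes,
the sparsifier, and the `workers` interface are illustrative choices, not the
paper's exact algorithm.

```python
import numpy as np

def rand_sparsify(v, k, rng):
    """Random-k sparsification with unbiased rescaling -- a stand-in
    for the compression operator (the paper analyzes quantizers)."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)  # rescale so E[Q(v)] = v
    return out

def diana_step(x, workers, h, lr=0.1, alpha=0.5, k=10, rng=None):
    """One DIANA-style step: each worker compresses the *difference*
    between its gradient and a local shift h[i], then updates the
    shift, so the transmitted signal shrinks as h[i] learns grad_i."""
    rng = rng or np.random.default_rng()
    g_hat = np.zeros_like(x)
    for i, grad in enumerate(workers):
        delta = rand_sparsify(grad(x) - h[i], k, rng)  # what is sent
        g_hat += h[i] + delta   # unbiased estimate of grad_i(x)
        h[i] += alpha * delta   # shift update: h_i tracks grad_i
    return x - lr * g_hat / len(workers)
```

Because each shift h[i] tracks the corresponding gradient, the compressed
differences vanish near the optimum, which is what lets the method converge
exactly where plain compressed SGD stalls.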
Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric
Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that is self-adaptive to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients that are trained beforehand is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities in comparison to several other deblocking techniques. The proposed method outperforms the others significantly, both objectively and subjectively.
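A minimal sketch of the pipeline described above, assuming a hypothetical blockiness metric, score thresholds, and offline-trained coefficient banks; only the select-then-filter structure is taken from the abstract.

```python
import numpy as np

def deblock(frame, metric, banks, thresholds, radius=2):
    """Quality-adaptive deblocking sketch: score the frame with a
    no-reference blockiness metric, pick the filter bank trained
    (offline, by least squares) for that quality range, and apply
    it as a linear neighborhood filter.

    frame      -- 2D grayscale array
    metric     -- callable returning a blockiness score (placeholder)
    banks      -- list of (2r+1)x(2r+1) coefficient arrays (placeholder)
    thresholds -- sorted score boundaries between quality classes
    """
    coeffs = banks[int(np.searchsorted(thresholds, metric(frame)))]
    pad = np.pad(frame.astype(float), radius, mode="edge")
    h, w = frame.shape
    n = 2 * radius + 1
    out = np.zeros((h, w))
    # each output pixel is a trained weighted sum of its neighborhood
    for dy in range(n):
        for dx in range(n):
            out += coeffs[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out
```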
Natural Compression for Distributed Deep Learning
Modern deep learning models are often trained in parallel over a collection
of distributed machines to reduce training time. In such settings,
communication of model updates among machines becomes a significant performance
bottleneck and various lossy update compression techniques have been proposed
to alleviate this problem. In this work, we introduce a new, simple yet
theoretically and practically effective compression technique: {\em natural
compression (NC)}. Our technique is applied individually to all entries of the
to-be-compressed update vector and works by randomized rounding to the nearest
(negative or positive) power of two, which can be computed in a "natural" way
by ignoring the mantissa. We show that compared to no compression, NC increases
the second moment of the compressed vector by not more than the tiny factor
\nicefrac{9}{8}, which means that the effect of NC on the convergence speed
of popular training algorithms, such as distributed SGD, is negligible.
However, the communication savings enabled by NC are substantial, leading to
{\em 3-4x improvement in overall theoretical running time}. For
applications requiring more aggressive compression, we generalize NC to {\em
natural dithering}, which we prove is {\em exponentially better} than the
common random dithering technique. Our compression operators can be used on
their own or in combination with existing operators for a more aggressive
combined effect, and offer new state-of-the-art both in theory and practice.
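The rounding rule is simple enough to sketch directly. Below is a plain NumPy
illustration of randomized rounding to the nearest signed power of two as the
abstract describes it; the paper's actual implementation works on the
floating-point exponent bits and needs no logarithms, so treat this as a
reference sketch rather than the authors' code.

```python
import numpy as np

def natural_compression(x, rng=None):
    """Round each entry of x to an enclosing signed power of two,
    randomized so the result is unbiased: E[C(x)] = x."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0
    mag = np.abs(x[nz])
    a = 2.0 ** np.floor(np.log2(mag))  # lower power of two: a <= |x| < 2a
    p = (mag - a) / a                  # P(round up), chosen for unbiasedness
    up = rng.random(p.shape) < p
    out[nz] = np.sign(x[nz]) * np.where(up, 2.0 * a, a)
    return out
```

For example, 2.5 is mapped to 2 with probability 3/4 and to 4 with probability
1/4, so its expectation is preserved; a short calculation over $|x| \in [a, 2a]$
shows the second moment grows by at most the factor \nicefrac{9}{8} quoted
above.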