    Distributed Hypothesis Testing Over Multi-Access Channels

    Consider distributed hypothesis testing over multiple-access channels (MACs), where the receiver wishes to maximize the type-II error exponent under a constrained type-I error probability. For this setup, we propose a scheme that combines hybrid coding with a MAC version of Borade's unequal error protection. It achieves the optimal type-II error exponent for a generalization of testing against independence over an orthogonal MAC when the transmitters' sources are independent. In this case, hybrid coding can be replaced by simpler separate source-channel coding. The paper also presents upper and lower bounds on the optimal type-II error exponent for generalized testing against independence of Gaussian sources over a Gaussian MAC. The bounds are close to each other and significantly larger than the type-II error exponent achievable with separate source-channel coding.
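
    As a point of reference for the Gaussian results above, the classical type-II error exponent for Gaussian testing against independence over a single noiseless rate-R link is theta(R) = (1/2) log(1 / (1 - rho^2 (1 - 2^(-2R)))), where rho is the source correlation. The sketch below only evaluates this baseline numerically; it is not the paper's MAC scheme, and the correlation and rate values are illustrative assumptions.

        # Baseline sketch: Gaussian testing against independence over a
        # noiseless rate-R link (classical single-sensor result, shown here
        # only for reference; the paper's MAC bounds are different).
        import numpy as np

        def gaussian_tai_exponent(rho, R):
            """theta(R) = 0.5 * log2(1 / (1 - rho^2 * (1 - 2^(-2R)))), in bits."""
            return 0.5 * np.log2(1.0 / (1.0 - rho**2 * (1.0 - 2.0 ** (-2.0 * R))))

        for R in (0.5, 1.0, 2.0):
            print(f"R = {R}: exponent = {gaussian_tai_exponent(0.8, R):.3f} bits")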

    Distributed Hypothesis Testing with Privacy Constraints

    We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.
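
    To make the privacy constraint concrete, the sketch below computes the mutual information I(X; Xs) between a uniform binary source X and a randomized-response (binary symmetric) sanitization Xs with flip probability delta; this mechanism and the parameter values are illustrative assumptions, not taken from the paper.

        # Illustrative sketch: mutual-information leakage of a randomized-
        # response (BSC) sanitizer applied to a Bernoulli(1/2) source.
        import numpy as np

        def h2(p):
            """Binary entropy in bits."""
            p = np.clip(p, 1e-12, 1.0 - 1e-12)
            return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

        def leakage(delta):
            """I(X; Xs) = H(Xs) - H(Xs | X) = 1 - h2(delta) for uniform X."""
            return 1.0 - h2(delta)

        for delta in (0.05, 0.2, 0.5):
            print(f"delta = {delta}: I(X; Xs) = {leakage(delta):.3f} bits")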

    M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion

    In federated learning (FL), the communication constraint between the remote learners and the Parameter Server (PS) is a crucial bottleneck. For this reason, model updates must be compressed so as to minimize the loss in accuracy resulting from the communication constraint. This paper proposes the "M-magnitude weighted L2 distortion + 2 degrees of freedom" (M22) algorithm, a rate-distortion-inspired approach to gradient compression for federated training of deep neural networks (DNNs). In particular, we propose a family of distortion measures between the original gradient and its reconstruction, referred to as the "M-magnitude weighted L2" distortion, and we assume that gradient updates follow an i.i.d. generalized normal or Weibull distribution, each of which has two degrees of freedom. Both the distortion measure and the gradient distribution have one free parameter that can be fitted as a function of the iteration number. Given a choice of gradient distribution and distortion measure, we design the quantizer minimizing the expected distortion in gradient reconstruction. To measure the gradient compression performance under a communication constraint, we define the per-bit accuracy as the optimal improvement in accuracy that one bit of communication brings to the centralized model over the training period. Using this performance measure, we systematically benchmark the choice of gradient distribution and distortion measure. We provide substantial insights on the role of these choices and argue that significant performance improvements can be attained using such a rate-distortion-inspired compressor.
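
    The quantizer-design step can be viewed as a weighted Lloyd iteration: for the distortion d(g, q) = |g|^M (g - q)^2, the nearest-level assignment is unchanged and each reconstruction level becomes a magnitude-weighted centroid. The Python sketch below illustrates this idea on samples from a generalized normal gradient model; the shape, scale, M, and number of levels are illustrative assumptions, and the code is not the paper's M22 implementation.

        # Minimal sketch of a Lloyd-style quantizer for an "M-magnitude
        # weighted L2" distortion d(g, q) = |g|^M * (g - q)^2, with gradients
        # modeled as i.i.d. generalized normal (parameters are assumptions).
        import numpy as np
        from scipy.stats import gennorm

        def design_quantizer(samples, K=8, M=1.0, iters=50):
            w = np.abs(samples) ** M                      # magnitude weights
            levels = np.quantile(samples, np.linspace(0.05, 0.95, K))
            for _ in range(iters):
                # The weight does not depend on the level, so assignment is
                # still nearest-level in the ordinary Euclidean sense.
                idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
                for k in range(K):
                    cell = idx == k
                    if cell.any():
                        # Weighted centroid minimizes the expected distortion
                        # E[|g|^M * (g - q_k)^2] within the cell.
                        levels[k] = np.sum(w[cell] * samples[cell]) / np.sum(w[cell])
            return np.sort(levels)

        rng = np.random.default_rng(0)
        grads = gennorm.rvs(beta=0.8, scale=0.1, size=20000, random_state=rng)
        print(design_quantizer(grads, K=8, M=1.0))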