
    Over-The-Air Computation in Correlated Channels

    This paper presents and analyzes a one-shot coding scheme for Over-the-Air (OTA) computation over a fast-fading multiple-access wireless channel. The assumed channel model incorporates correlations in both fading and noise, over time as well as among users. The model also allows for non-Gaussian components in fading and noise, provided that the distributions are sub-Gaussian (as is the case for a sum of Gaussian and bounded random variables), rendering the proposed scheme robust to a large class of non-Gaussian interference and noise known to occur in many practical scenarios. OTA computation has great potential for reducing communication cost in applications such as Machine Learning (ML)-based distributed anomaly detection in large wireless sensor networks. We illustrate this potential through extensive numerical simulations.
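    For intuition, the sketch below illustrates the basic OTA-computation idea that concurrent transmissions superpose in the channel, so the receiver directly observes the desired aggregate. It is a minimal coherent channel-inversion baseline, not the paper's one-shot scheme for correlated channels; all names and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal OTA-summation sketch (coherent channel-inversion baseline, not the
    # paper's one-shot scheme). Illustrative assumptions throughout.
    rng = np.random.default_rng(0)

    num_users = 20
    x = rng.uniform(0.0, 1.0, size=num_users)   # local values to be aggregated
    # Rayleigh fading coefficients (i.i.d. here; the paper's model allows correlation)
    h = (rng.normal(size=num_users) + 1j * rng.normal(size=num_users)) / np.sqrt(2)

    # Channel inversion at the transmitters (assumes transmit-side CSI and |h| not
    # too small; the paper's scheme avoids this and additionally tolerates
    # correlated, sub-Gaussian fading and noise).
    tx = x / h

    noise = 0.05 * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    y = np.sum(h * tx) + noise                  # superposition seen by the receiver

    print("true sum     :", x.sum())
    print("OTA estimate :", y.real)
    ```

    Because every user transmits at once, the receiver obtains the sum in a single channel use, which is the communication-cost advantage the abstract points to for large sensor networks.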

    Non-Coherent Over-the-Air Decentralized Stochastic Gradient Descent

    This paper proposes a Decentralized Stochastic Gradient Descent (DSGD) algorithm to solve distributed machine-learning tasks over wirelessly connected systems, without the coordination of a base station. It combines local stochastic gradient descent steps with a Non-Coherent Over-The-Air (NCOTA) consensus scheme at the receivers that enables concurrent transmissions by leveraging the waveform-superposition properties of the wireless channels. With NCOTA, local optimization signals are mapped to a mixture of orthogonal preamble sequences and transmitted concurrently over the wireless channel under half-duplex constraints. Consensus is estimated by non-coherently combining the received signals with the preamble sequences and mitigating the impact of noise and fading via a consensus stepsize. NCOTA-DSGD operates without channel state information (typically used in over-the-air computation schemes for channel inversion) and leverages the channel pathloss to mix signals, without explicit knowledge of the mixing weights (typically known in consensus-based optimization). It is shown that, with a suitable tuning of decreasing consensus and learning stepsizes, the error (measured as Euclidean distance) between the local and globally optimal models vanishes with rate $\mathcal{O}(k^{-1/4})$ after $k$ iterations. NCOTA-DSGD is evaluated numerically by solving an image classification task on the MNIST dataset, cast as a regularized cross-entropy loss minimization. Numerical results show faster convergence, in terms of running time, than implementations of the classical DSGD algorithm over digital and analog orthogonal channels when the number of learning devices is large, under stringent delay constraints.
    Comment: Submitted to the IEEE Transactions on Signal Processing
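    As a rough illustration of the consensus-plus-SGD structure described above (not the paper's NCOTA preamble mapping or its analysis), the sketch below runs scalar decentralized SGD where each node mixes a noisy, superposition-style consensus term with its local gradient, using decreasing consensus and learning stepsizes. The network, pathloss weights, losses, and stepsize schedules are assumptions made for illustration only.

    ```python
    import numpy as np

    # Toy decentralized SGD with a noisy superposition-based consensus term
    # (illustrative sketch; not the paper's NCOTA scheme).
    rng = np.random.default_rng(1)

    num_nodes = 10
    z = rng.normal(size=num_nodes)              # local scalar models
    target = 3.0                                # common minimizer of f_i(z) = 0.5 * (z - target) ** 2

    # Pathloss-like mixing weights between nodes (unknown to the nodes in the
    # paper; their aggregate effect is assumed estimable in this toy example).
    w = rng.uniform(0.02, 0.10, size=(num_nodes, num_nodes))
    np.fill_diagonal(w, 0.0)
    d = w.sum(axis=1)                           # total received "weight" per node

    for k in range(1, 2001):
        gamma_k = 0.5 / k ** 0.75               # decreasing consensus stepsize
        eta_k = 0.5 / k ** 0.75                 # decreasing learning stepsize

        # Superposition over the air: each node receives a pathloss-weighted sum
        # of the other nodes' signals plus noise.
        rx = w @ z + 0.1 * rng.normal(size=num_nodes)

        grad = z - target                       # gradients of the local quadratic losses
        z = z + gamma_k * (rx - d * z) - eta_k * grad

    print("local models  :", np.round(z, 3))
    print("global optimum:", target)
    ```

    The decreasing stepsizes progressively average out channel noise in the consensus term while still letting the gradient term drive all local models toward the common optimum, mirroring the role the abstract assigns to the tuned consensus and learning stepsizes.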