
    Distributed and Private Coded Matrix Computation with Flexible Communication Load

    Tensor operations, such as matrix multiplication, are central to large-scale machine learning applications. For user-driven tasks, these operations can be carried out on a distributed computing platform with a master server at the user side and multiple workers in the cloud operating in parallel. For such platforms, it has recently been shown that coding over the input data matrices can reduce the computational delay, yielding a trade-off between recovery threshold and communication load. In this paper, we impose an additional security constraint on the data matrices and assume that workers can collude to eavesdrop on their content. Specifically, we introduce a novel class of secure codes, referred to as secure generalized PolyDot codes, that generalizes previously published non-secure versions of these codes for matrix multiplication. These codes extend the state of the art by allowing a flexible trade-off between recovery threshold and communication load for a fixed maximum number of colluding workers. Comment: 8 pages, 6 figures, submitted to the 2019 IEEE International Symposium on Information Theory (ISIT).
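
    As a rough illustration of the coding idea behind such schemes, a minimal Python sketch of a one-sided secure polynomial code follows. It is not the paper's generalized PolyDot construction: only A is protected, the block counts m and t and all variable names are invented for the example, and real constructions work over finite fields (floats are used here purely for readability).

        import numpy as np

        rng = np.random.default_rng(0)
        m, t = 2, 1                        # A split into m row blocks; tolerate t colluders
        A = rng.standard_normal((4, 3))
        B = rng.standard_normal((3, 3))
        A_blocks = np.split(A, m)          # two (2, 3) blocks
        masks = [rng.standard_normal(A_blocks[0].shape) for _ in range(t)]
        coeffs = A_blocks + masks          # coefficients of a degree-(m+t-1) matrix polynomial

        xs = np.arange(1.0, m + t + 1)     # m + t distinct evaluation points, one per worker
        shares = [sum(c * x**j for j, c in enumerate(coeffs)) for x in xs]
        products = [s @ B for s in shares] # each worker multiplies its coded share by B

        V = np.vander(xs, increasing=True)            # master interpolates the polynomial
        C = np.linalg.solve(V, np.stack([p.ravel() for p in products]))
        result = np.vstack([C[j].reshape(2, 3) for j in range(m)])
        assert np.allclose(result, A @ B)  # the first m coefficients recover A @ B

    Any m + t worker responses suffice to interpolate, which is the recovery threshold of this toy scheme; the paper's codes additionally protect both input matrices and let the threshold be traded against per-worker communication.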

    Coded Federated Computing in Wireless Networks with Straggling Devices and Imperfect CSI

    Distributed computing platforms typically assume the availability of reliable and dedicated connections among the processors. This work considers an alternative scenario, relevant for wireless data centers and federated learning, in which the distributed processors, operating on generally distinct coded data, are connected via shared wireless channels accessed via full-duplex transmission. The study accounts for both wireless and computing impairments, including interference, imperfect Channel State Information (CSI), and straggling processors, and it assumes a Map-Shuffle-Reduce coded computing paradigm. The total latency of the system, obtained as the sum of computing and communication delays, is studied for different shuffling strategies, revealing the interplay between distributed computing, coding, and cooperative or coordinated transmission. Comment: Submitted for possible conference publication.
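
    As a back-of-the-envelope companion to that latency decomposition (not the paper's model), the Python sketch below draws per-worker computing times from an assumed shifted-exponential straggler distribution, lets coding tolerate the slowest n - k workers, and adds a fixed Shuffle delay; every parameter value here is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 10, 6                     # n workers; coding lets the master finish after k
        mu, shift = 1.0, 0.5             # assumed straggler model: shift + Exp(mu) per worker
        shuffle_bits, rate = 1e6, 2e6    # bits exchanged in the Shuffle phase, link rate (bit/s)

        def total_latency(trials=10_000):
            comp = shift + rng.exponential(1 / mu, size=(trials, n))
            comp_done = np.sort(comp, axis=1)[:, k - 1]   # wait for the k-th fastest worker
            comm = shuffle_bits / rate                    # fixed communication delay in this toy
            return comp_done + comm

        print(f"mean total latency: {total_latency().mean():.3f} s")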

    Wireless Map-Reduce Distributed Computing with Full-Duplex Radios and Imperfect CSI

    Consider a distributed computing system in which the worker nodes are connected over a shared wireless channel. Nodes can store a fraction of the data set over which computation needs to be carried out, and a Map-Shuffle-Reduce protocol is followed in order to enable collaborative processing. If there exists some level of redundancy among the computations performed at the nodes, the inter-node communication load during the Shuffle phase can be reduced by using either coded multicasting or cooperative transmission. It was previously shown that the latter approach is able to halve the communication load in the high signal-to-noise ratio (SNR) regime in the presence of full-duplex nodes and perfect transmit-side Channel State Information (CSI). In this paper, a novel scheme based on superposition coding is proposed and demonstrated to outperform both coded multicasting and cooperative transmission under the assumption of imperfect CSI.
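
    The following toy Python sketch illustrates the coded-multicasting baseline referred to above (the node layout and data are invented for the example): with three nodes each storing two of three files, a single XOR multicast delivers a missing intermediate value to two nodes at once, halving the load relative to uncoded unicasts.

        import numpy as np

        rng = np.random.default_rng(2)
        # v[(q, n)]: intermediate value needed by reducer q, computable from file n
        v = {(q, n): rng.integers(0, 256, 4, dtype=np.uint8)
             for q in (1, 2, 3) for n in (1, 2, 3)}
        stores = {1: {1, 2}, 2: {2, 3}, 3: {1, 3}}        # node -> stored files
        missing = {q: ({1, 2, 3} - fs).pop() for q, fs in stores.items()}

        # Node 1 can compute both v[(2, 1)] and v[(3, 2)] from its stored files,
        # so one multicast of their XOR serves nodes 2 and 3 simultaneously.
        packet = v[(2, 1)] ^ v[(3, 2)]
        for q, known in ((2, (3, 2)), (3, (2, 1))):
            decoded = packet ^ v[known]   # cancel the part the node can compute itself
            assert np.array_equal(decoded, v[(q, missing[q])])
        print("one coded packet replaced two uncoded unicasts")

    Cooperative transmission instead exploits the fact that several nodes hold the same intermediate value and can transmit it jointly; the paper's superposition-coding scheme is shown to outperform both under imperfect CSI.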