
    Reducing Communication Overhead of the Subset Difference Scheme

    In Broadcast Encryption (BE) systems such as Pay-TV, AACS, online content sharing and broadcasting, reducing the header length (the communication overhead per session) is of practical interest. The Subset Difference (SD) scheme due to Naor-Naor-Lotspiech (NNL) is the most widely used BE scheme. We introduce the (a, b, γ) augmented binary tree subset difference ((a, b, γ)-ABTSD) scheme, a generalization of the NNL-SD scheme. By varying the parameters (a, b, γ), it is possible to obtain O(n log n) different schemes. The average header length achieved by the new schemes is smaller than that of all known schemes which have the same decryption time as the NNL-SD scheme, while achieving non-trivial trade-offs between user storage and header size. In exchange, the amount of key material that a user is required to store increases. For the applications mentioned above, reducing the header size and achieving fast decryption is perhaps more of a concern than user storage.
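
    For intuition, the sketch below (a toy illustration, not the authors' code) shows the subset-cover framework that NNL-SD instantiates, using the simpler complete-subtree cover: the non-revoked leaves of a binary tree are covered by maximal untainted subtrees, and the header carries one ciphertext per subset. The SD scheme replaces these subsets with differences of two nested subtrees, which shortens the header to at most 2r − 1 subsets for r revoked users.

    ```python
    def cover_complete_subtree(n_leaves, revoked):
        """Complete-subtree cover of the non-revoked users.

        n_leaves is a power of two; the tree is stored heap-style, so the
        root is node 1, node i has children 2i and 2i+1, and user u sits
        at leaf n_leaves + u.
        """
        if not revoked:
            return [1]                      # one subset covers everyone
        tainted = set()                     # nodes on a root-to-revoked-leaf path
        for u in revoked:
            node = n_leaves + u
            while node >= 1:
                tainted.add(node)
                node //= 2
        cover = []
        def walk(node):
            if node not in tainted:         # whole subtree is non-revoked
                cover.append(node)
            elif node < n_leaves:           # tainted internal node: recurse
                walk(2 * node)
                walk(2 * node + 1)
        walk(1)
        return cover                        # header length = len(cover)

    print(cover_complete_subtree(8, {0}))   # [9, 5, 3]: three subsets
    ```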

    Universal secure rank-metric coding schemes with optimal communication overheads

    We study the problem of reducing the communication overhead from a noisy wire-tap channel or storage system where data is encoded as a matrix, when more columns (or their linear combinations) are available. We present applications to reducing communication overheads in universal secure linear network coding and in secure distributed storage with crisscross errors and erasures in the presence of a wire-tapper. Our main contribution is a method to transform coding schemes based on linear rank-metric codes, with certain properties, into schemes with lower communication overheads. By applying this method to pairs of Gabidulin codes, we obtain coding schemes with optimal information rate with respect to their security and rank error correction capability, and with universally optimal communication overheads, when n ≤ m, where n and m denote the number of columns and the number of rows, respectively. Moreover, our method can be applied to other families of maximum rank distance codes when n > m. The downside of the method is that it generally expands the packet length, although some practical instances come at no cost.
    Comment: 21 pages, LaTeX; parts of this paper have been accepted for presentation at the IEEE International Symposium on Information Theory, Aachen, Germany, June 2017
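
    As background (our illustration, not part of the paper), the rank metric measures the distance between two matrix codewords as the rank of their difference, which is exactly why rank-metric codes suit crisscross errors: an error confined to u rows and v columns has rank at most u + v. A minimal GF(2) sketch:

    ```python
    import numpy as np

    def rank_gf2(M):
        """Rank of a binary matrix over GF(2) via Gaussian elimination."""
        M = M.copy() % 2
        rank = 0
        rows, cols = M.shape
        for col in range(cols):
            pivot = next((r for r in range(rank, rows) if M[r, col]), None)
            if pivot is None:
                continue
            M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
            for r in range(rows):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]               # clear the rest of the column
            rank += 1
        return rank

    def rank_distance(A, B):
        """d_R(A, B) = rank(A - B); over GF(2), subtraction is XOR."""
        return rank_gf2(A ^ B)

    A = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1]], dtype=np.uint8)
    B = A.copy()
    B[0, :] ^= 1                                  # corrupt one full row ...
    B[:, 2] ^= 1                                  # ... and one full column
    print(rank_distance(A, B))                    # 2: crisscross error of rank <= 2
    ```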

    Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

    Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N^2), while tolerating a user dropout rate of up to 50%. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a 40× speedup over the state-of-the-art protocols with up to N = 200 users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
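
    One of the building blocks named above, additive secret sharing, can be sketched in a few lines (a toy scalar version for intuition, not the Turbo-Aggregate protocol itself): each user splits its update into random shares that sum to the update, so only the aggregate is ever reconstructed.

    ```python
    import random

    P = 2**61 - 1          # a public prime modulus; all arithmetic is mod P

    def share(x, n):
        """Split x into n additive shares with sum x (mod P)."""
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    updates = [11, 22, 33]                  # toy scalar model updates
    n = len(updates)
    all_shares = [share(u, n) for u in updates]

    # User j sums the j-th share of every update; an individual share
    # reveals nothing about any single user's update.
    partials = [sum(s[j] for s in all_shares) % P for j in range(n)]

    aggregate = sum(partials) % P
    assert aggregate == sum(updates) % P    # only the sum, 66, is recovered
    ```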

    Two-tier channel estimation aided near-capacity MIMO transceivers relying on norm-based joint transmit and receive antenna selection

    We propose a norm-based joint transmit and receive antenna selection (NBJTRAS) aided near-capacity multiple-input multiple-output (MIMO) system relying on the assistance of a novel two-tier channel estimation scheme. Specifically, a rough estimate of the full MIMO channel is first generated using a low-complexity, low-training-overhead minimum mean square error based channel estimator, which relies on reusing a modest number of radio frequency (RF) chains. NBJTRAS is then carried out based on this initial full MIMO channel estimate. The NBJTRAS aided MIMO system is capable of significantly outperforming conventional MIMO systems equipped with the same modest number of RF chains, while dispensing with the idealised simplifying assumption of having perfectly known channel state information (CSI). Moreover, the initial subset channel estimate associated with the selected subset MIMO channel matrix is then used for activating a powerful semi-blind joint channel estimation and turbo detector-decoder, in which the channel estimate is refined by a novel block-of-bits selection based soft-decision aided channel estimator (BBSB-SDACE) embedded in the iterative detection and decoding process. The joint channel estimation and turbo detection-decoding scheme operating with the aid of the proposed BBSB-SDACE channel estimator is capable of approaching the performance of the near-capacity maximum-likelihood (ML) turbo transceiver associated with perfect CSI. This is achieved without increasing the complexity of the ML turbo detection and decoding process.
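
    The norm-based selection step itself is simple to illustrate (a minimal sketch under our own assumptions, not the paper's full NBJTRAS algorithm): rank transmit antennas by the column norms and receive antennas by the row norms of the estimated channel matrix, and keep as many of the strongest as there are RF chains.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def norm_based_selection(H_est, n_tx_sel, n_rx_sel):
        """Select antennas by channel norms; H_est has shape (n_rx, n_tx)."""
        tx = np.sort(np.argsort(np.linalg.norm(H_est, axis=0))[-n_tx_sel:])
        rx = np.sort(np.argsort(np.linalg.norm(H_est, axis=1))[-n_rx_sel:])
        return H_est[np.ix_(rx, tx)], rx, tx

    # Rough first-tier channel estimate: true channel plus estimation noise.
    shape = (8, 8)
    H_true = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    H_est = H_true + 0.1 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))

    H_sub, rx, tx = norm_based_selection(H_est, n_tx_sel=4, n_rx_sel=4)
    print("selected rx:", rx, "tx:", tx)    # 4x4 subset for the 4 RF chains
    ```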

    LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning

    Gradient-based distributed learning in Parameter Server (PS) computing architectures is subject to random delays due to straggling worker nodes, as well as to possible communication bottlenecks between the PS and the workers. Solutions have recently been proposed to separately address these impairments based on the ideas of gradient coding, worker grouping, and adaptive worker selection. This paper provides a unified analysis of these techniques in terms of wall-clock time, communication, and computation complexity measures. Furthermore, in order to combine the benefits of gradient coding and grouping in terms of robustness to stragglers with the communication and computation load gains of adaptive selection, novel strategies, named Lazily Aggregated Gradient Coding (LAGC) and Grouped-LAG (G-LAG), are introduced. Analysis and results show that G-LAG provides the best wall-clock time and communication performance, while maintaining a low computational cost, for two representative distributions of the computing times of the worker nodes.
    Comment: Submitted
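
    As a concrete reference point for gradient coding (the classic construction of the kind the paper builds on, not LAGC itself), the example below tolerates one straggler among three workers: each worker sends a fixed linear combination of per-partition gradients, and the full gradient sum is recoverable from any two replies.

    ```python
    import numpy as np

    # Encoding matrix: worker i transmits B[i] @ [g1, g2, g3].
    B = np.array([[0.5, 1.0,  0.0],
                  [0.0, 1.0, -1.0],
                  [0.5, 0.0,  1.0]])

    def decode(workers, sends):
        """Recover g1+g2+g3 from the replies of any 2 of the 3 workers."""
        # Find coefficients a with a @ B[workers] = (1, 1, 1).
        a, *_ = np.linalg.lstsq(B[workers].T, np.ones(3), rcond=None)
        return a @ sends

    g = np.array([[1.0, 2.0],               # toy per-partition gradients
                  [3.0, 4.0],
                  [5.0, 6.0]])
    sends = B @ g                           # one row per worker

    # Worker 1 straggles; workers 0 and 2 suffice.
    print(decode([0, 2], sends[[0, 2]]))    # [ 9. 12.] == g.sum(axis=0)
    ```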