
    Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

    Federated learning is a distributed framework for training machine learning models over data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log N)$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
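    To make the additive secret sharing primitive concrete, the minimal sketch below shows how a model update can be split into random shares that sum to the original value, so that an aggregator recovers only the sum of all updates and never any individual contribution. This is not the full Turbo-Aggregate protocol; the multi-group circular routing, dropout recovery, and coding-based redundancy are omitted, and all names and the modulus Q are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of additive secret sharing for secure aggregation,
# the masking primitive that Turbo-Aggregate builds on. Group routing,
# dropout handling, and aggregation redundancy are intentionally left out.
import random

Q = 2**31 - 1  # illustrative field modulus (assumption, not from the paper)

def share(update: int, n_users: int) -> list[int]:
    """Split an integer-encoded model update into n_users additive
    shares that sum to the update mod Q."""
    shares = [random.randrange(Q) for _ in range(n_users - 1)]
    shares.append((update - sum(shares)) % Q)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    """Each user sums the one share it received from every peer; the
    server then sums those partial sums, recovering the aggregate
    without ever seeing an individual update."""
    partial_sums = [sum(col) % Q for col in zip(*all_shares)]
    return sum(partial_sums) % Q

# Example: three users with scalar updates 5, 7, 11.
updates = [5, 7, 11]
all_shares = [share(u, len(updates)) for u in updates]
assert aggregate(all_shares) == sum(updates) % Q
```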

    Glimmers: Resolving the Privacy/Trust Quagmire

    Many successful services rely on trustworthy contributions from users. To establish that trust, such services often require access to privacy-sensitive information from users, thus creating a conflict between privacy and trust. Although it is likely impractical to expect both absolute privacy and trustworthiness at the same time, we argue that the current state of things, where individual privacy is usually sacrificed at the altar of trustworthy services, can be improved with a pragmatic Glimmer of Trust, which allows services to validate user contributions in a trustworthy way without forfeiting user privacy. We describe how trustworthy hardware such as Intel's SGX can be used client-side -- in contrast to much recent work exploring SGX in cloud services -- to realize the Glimmer architecture, and demonstrate how this realization is able to resolve the tension between privacy and trust in a variety of cases.
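    The sketch below illustrates the architectural pattern the abstract describes: a client-side trusted component inspects the privacy-sensitive data locally and releases only an authenticated verdict, so the service can trust the contribution without ever receiving the raw data. Real deployments would run the check inside an SGX enclave with remote attestation; the HMAC here merely stands in for the enclave's attested report, and every name in the sketch is a hypothetical illustration, not an API from the paper.

```python
# Conceptual sketch of the Glimmer pattern: validate sensitive data
# client-side, release only a signed verdict. The HMAC is a stand-in
# for an SGX attestation report (assumption for illustration).
import hmac
import hashlib

ENCLAVE_KEY = b"provisioned-at-attestation"  # placeholder secret

def validate_contribution(sensitive_data: bytes) -> bool:
    """Application-specific trustworthiness check, run entirely
    client-side on the raw data (here, a trivial placeholder)."""
    return len(sensitive_data) > 0

def glimmer(sensitive_data: bytes) -> tuple[bool, bytes]:
    """Return only the verdict plus a MAC over it; the raw
    sensitive_data never leaves the client."""
    verdict = validate_contribution(sensitive_data)
    tag = hmac.new(ENCLAVE_KEY, str(verdict).encode(),
                   hashlib.sha256).digest()
    return verdict, tag

# The service verifies the tag and trusts the verdict without ever
# receiving sensitive_data.
verdict, tag = glimmer(b"user's private sensor log")
```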