
    Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

    Federated learning is a distributed framework for training machine learning models over data residing on mobile devices while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log N)$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
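    As a rough illustration of the additive secret sharing primitive the abstract mentions, the minimal Python sketch below splits each (integer-quantized) model update into random shares that sum to the true update modulo a fixed modulus, so only the aggregate is ever reconstructed. This is a plain additive scheme for illustration only: it omits Turbo-Aggregate's multi-group circular structure, coded redundancy, and dropout handling, and the function names and modulus are assumptions, not the paper's protocol.

```python
import numpy as np

MODULUS = 2**32  # shares live in Z_MODULUS; updates are assumed quantized to integers


def share_update(update, num_users, rng):
    """Split an integer-valued model update into additive shares.

    The shares sum to the original update mod MODULUS; any collection of
    fewer than num_users shares is statistically independent of the update.
    """
    shares = [rng.integers(0, MODULUS, size=update.shape, dtype=np.int64)
              for _ in range(num_users - 1)]
    shares.append((update - np.sum(shares, axis=0)) % MODULUS)
    return shares


rng = np.random.default_rng(0)
num_users = 3
updates = [rng.integers(0, 2**16, size=4, dtype=np.int64) for _ in range(num_users)]

# Each user secret-shares its update; user j collects the j-th share of
# every update and publishes only the sum of those shares.
all_shares = [share_update(u, num_users, rng) for u in updates]
partial_sums = [sum(all_shares[i][j] for i in range(num_users)) % MODULUS
                for j in range(num_users)]

# Summing the published partial sums recovers exactly the aggregate update,
# while no individual update is revealed to the server or to other users.
aggregate = sum(partial_sums) % MODULUS
assert np.array_equal(aggregate, sum(updates) % MODULUS)
print(aggregate)
```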

    Systems for AutoML Research


    A Toolkit for Generating Scalable Stochastic Multiobjective Test Problems

    Real-world optimization problems typically include uncertainties over various aspects of the problem formulation. Some existing algorithms are designed to cope with stochastic multiobjective optimization problems, but a proper framework for benchmarking them has yet to be established. This paper presents a novel toolkit that generates scalable, stochastic, multiobjective optimization problems. A stochastic problem is generated by transforming the objective vectors of a given deterministic test problem into random vectors. All random objective vectors are bounded by the feasible objective space defined by the deterministic problem. Therefore, the global solution of the deterministic problem can also serve as a reference for the stochastic problem. A simple parametric distribution for the random objective vector is defined in a radial coordinate system, allowing for direct control over the dual challenges of convergence towards the true Pareto front and diversity across the front. An example of a stochastic test problem generated by the toolkit is provided.
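    To make the radial-coordinate construction concrete, here is a minimal Python sketch for the two-objective case: the deterministic objective vector is re-expressed as a radius and an angle, and noise is applied in those coordinates, so radial noise acts along the convergence direction and angular noise along the diversity direction. The function name, the parameters sigma_r and sigma_theta, and the Gaussian noise model are assumptions for illustration; the toolkit's actual parametric distribution and bounding mechanism may differ.

```python
import numpy as np


def make_stochastic(f_deterministic, rng, sigma_r=0.1, sigma_theta=0.05):
    """Wrap a deterministic bi-objective function into a stochastic one.

    Hypothetical sketch: the deterministic objective vector is perturbed
    in radial coordinates (radius r, angle theta), so sigma_r controls
    noise toward/away from the Pareto front and sigma_theta controls
    noise across the front.
    """
    def f_stochastic(x):
        f = np.asarray(f_deterministic(x), dtype=float)
        r = np.linalg.norm(f)
        theta = np.arctan2(f[1], f[0])
        # Clip the noisy radius at zero so the random objective vector
        # stays within the non-negative (feasible) objective space.
        r_noisy = max(r * (1.0 + rng.normal(0.0, sigma_r)), 0.0)
        theta_noisy = theta + rng.normal(0.0, sigma_theta)
        return np.array([r_noisy * np.cos(theta_noisy),
                         r_noisy * np.sin(theta_noisy)])
    return f_stochastic


# Example: a noisy version of a simple deterministic bi-objective problem.
rng = np.random.default_rng(42)
f_det = lambda x: (x[0] ** 2, (x[0] - 2.0) ** 2)
f_sto = make_stochastic(f_det, rng)
print(f_sto(np.array([1.0])))  # random objective vector near the deterministic (1, 1)
```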