
    FederBoost: Private Federated Learning for GBDT

    An emerging trend in machine learning and artificial intelligence is federated learning (FL), which allows multiple participants to contribute their training data to train a better model. It promises to keep the training data local to each participant, leading to low communication complexity and high privacy. However, two problems in FL remain unsolved: (1) it cannot handle vertically partitioned data, and (2) it cannot support decision trees. Existing FL solutions for vertically partitioned data or decision trees require heavy cryptographic operations. In this paper, we propose a framework named FederBoost for private federated learning of gradient boosting decision trees (GBDT). It supports running GBDT over both horizontally and vertically partitioned data. The key observation behind the design of FederBoost is that the whole training process of GBDT relies on the ordering of the data rather than the values themselves. Consequently, vertical FederBoost does not require any cryptographic operation, and horizontal FederBoost only requires lightweight secure aggregation. We fully implement FederBoost and evaluate its utility and efficiency through extensive experiments on three public datasets. Our experimental results show that both vertical and horizontal FederBoost achieve the same level of AUC as centralized training, where all data are collected in a central server, and both can finish training within half an hour even over a WAN.
    Comment: 15 pages, 8 figures
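    The key observation above can be made concrete with a small sketch: standard GBDT split finding only scans prefix sums of gradients and Hessians in feature order, so replacing feature values with their ranks leaves every split gain unchanged. The snippet below is an illustrative Python sketch of this property (the function `best_split_gain` and the synthetic data are assumptions for illustration, not FederBoost's actual protocol).

    import numpy as np

    def best_split_gain(feature, grad, hess, lam=1.0):
        """Scan prefix sums of gradients/Hessians in feature order and
        return the best split gain (standard GBDT split-finding objective)."""
        order = np.argsort(feature)          # only the ordering of the feature is used
        g, h = grad[order], hess[order]
        G, H = g.sum(), h.sum()
        gl = np.cumsum(g)[:-1]               # left-child prefix sums for every split point
        hl = np.cumsum(h)[:-1]
        gr, hr = G - gl, H - hl
        gain = gl**2 / (hl + lam) + gr**2 / (hr + lam) - G**2 / (H + lam)
        return gain.max()

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)                # raw feature values
    g = rng.normal(size=1000)                # per-instance gradients
    h = np.ones(1000)                        # Hessians (e.g. squared loss)

    # Replacing values with their ranks leaves every split gain unchanged,
    # because argsort(x) == argsort(rank(x)) in the absence of ties.
    ranks = np.argsort(np.argsort(x)).astype(float)
    assert np.isclose(best_split_gain(x, g, h), best_split_gain(ranks, g, h))

    Because only the ordering matters, parties holding vertically partitioned features can, in principle, expose rank or bucket information instead of raw values, which is what lets vertical FederBoost avoid cryptographic operations.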

    Theory meets practice at the median: a worst-case comparison of relative error quantile algorithms

    Estimating the distribution and quantiles of data is a foundational task in data mining and data science. We study algorithms which provide accurate results for extreme quantile queries using a small amount of space, thus helping to understand the tails of the input distribution. Namely, we focus on two recent state-of-the-art solutions: t-digest and ReqSketch. While t-digest is a popular compact summary which works well in a variety of settings, ReqSketch comes with formal accuracy guarantees at the cost of its size growing as new observations are inserted. In this work, we provide insight into which conditions make one preferable to the other. In particular, we show how to construct inputs for t-digest that induce an almost arbitrarily large error and demonstrate that it fails to provide accurate results even on i.i.d. samples from a highly non-uniform distribution. We propose practical improvements to ReqSketch, making it faster than t-digest, while its error stays bounded on any instance. Still, our results confirm that t-digest remains more accurate on the "non-adversarial" data encountered in practice.
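    To illustrate why extreme quantile queries are the hard case that relative-error sketches target, the following Python sketch measures relative rank error of a naive fixed-size random-sample summary on a heavy-tailed stream. It is an assumed toy setup for exposition only, not t-digest or ReqSketch; the distribution, summary size, and helper `relative_rank_error` are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.pareto(a=1.5, size=1_000_000)      # heavy-tailed, highly non-uniform stream
    summary = rng.choice(data, size=1_000)        # naive fixed-size random-sample "summary"

    def relative_rank_error(estimate, data, q):
        """Error in rank space, normalised by the distance to the nearer tail,
        which is the regime relative-error guarantees are stated for."""
        est_rank = np.searchsorted(np.sort(data), estimate) / len(data)
        return abs(est_rank - q) / min(q, 1 - q)

    for q in (0.5, 0.99, 0.9999):
        est = np.quantile(summary, q)
        print(f"q={q:<7} relative rank error = {relative_rank_error(est, data, q):.3f}")

    With a uniform random sample, the relative rank error stays small near the median but blows up at q = 0.9999, which is exactly the regime where a summary with relative-error guarantees, such as ReqSketch, is designed to stay accurate.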