1,621 research outputs found

    Vertical Federated Learning

    Vertical Federated Learning (VFL) is a federated learning setting in which multiple parties that hold different features about the same set of users jointly train machine learning models without exposing their raw data or model parameters. Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy. We provide an exhaustive categorization of VFL settings and privacy-preserving protocols and comprehensively analyze the privacy attacks and defense strategies for each protocol. We then propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, and effectiveness constraints. Finally, we review the most recent advances in industrial applications and highlight open challenges and future directions for VFL.
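    To make the setting concrete, here is a minimal sketch (not taken from the paper) of one way two feature-holding parties could jointly fit a logistic-regression model on the same users: each party keeps its own partial model, and only intermediate scores and per-sample errors are exchanged. The party names, feature split, plain-sum aggregation, and NumPy training loop are illustrative assumptions; a real VFL protocol would additionally protect the exchanged values, e.g. with encryption or masking.

```python
# Illustrative sketch of a vertically partitioned logistic-regression training loop.
# Assumptions: two parties, plaintext exchange of partial scores and errors.
import numpy as np

rng = np.random.default_rng(0)

# Same 100 users, but each party holds only its own feature columns.
n_users = 100
X_party_a = rng.normal(size=(n_users, 3))   # e.g. a bank's features (assumed)
X_party_b = rng.normal(size=(n_users, 2))   # e.g. a retailer's features (assumed)
y = (X_party_a[:, 0] + X_party_b[:, 0] > 0).astype(float)  # labels held by party A

w_a = np.zeros(3)   # each party keeps and updates its own weights
w_b = np.zeros(2)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    # Each party computes a partial score from its local features only.
    partial_a = X_party_a @ w_a
    partial_b = X_party_b @ w_b

    # The partial scores (not the raw features) are combined into a prediction.
    pred = sigmoid(partial_a + partial_b)

    # The label holder computes the per-sample error; sharing it lets each
    # party update its own weights without ever seeing the other's features.
    error = pred - y
    w_a -= lr * X_party_a.T @ error / n_users
    w_b -= lr * X_party_b.T @ error / n_users

acc = np.mean((sigmoid(X_party_a @ w_a + X_party_b @ w_b) > 0.5) == y)
print("training accuracy after 200 joint rounds:", round(float(acc), 3))
```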

    Federated Machine Learning

    In recent years, machine learning has transformed areas such as computer vision, image recognition, and speech recognition and processing. The effectiveness of machine learning rests firmly on data, and that data is often gathered in privacy-sensitive circumstances. Federated learning is an emerging area of research that enables models to be trained without centrally collecting the data: instead of transferring their data, clients cooperate to train a model by sending only weight updates to a server. While this is better for privacy and more adaptable in some circumstances, it can also be very expensive. This thesis introduces the fundamental theory, architecture, and procedures of federated machine learning and its potential in numerous applications. Several optimisation methods and privacy-preserving mechanisms, such as differential privacy, are also reviewed.
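    As a concrete illustration of the clients-send-only-weight-updates idea described above, the following is a minimal federated averaging (FedAvg-style) sketch; the linear model, the three synthetic clients, and the single local epoch per round are assumptions made for illustration, not the thesis's actual setup.

```python
# Illustrative FedAvg-style loop: clients train locally, the server only averages weights.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w_global, X, y, lr=0.1, epochs=1):
    """One client's local training; only the resulting weights leave the client."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient on local data
        w -= lr * grad
    # (A differentially private variant would add calibrated noise here before sending.)
    return w

# Three clients, each with private data that never leaves the device.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=40)
    clients.append((X, y))

w_global = np.zeros(4)
for rnd in range(50):
    # Each client computes an update starting from the current global model ...
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    # ... and the server only averages the returned weights.
    w_global = np.mean(local_weights, axis=0)

print("learned global weights:", np.round(w_global, 2))
```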

    The Tradeoff Between Privacy and Accuracy in Anomaly Detection Using Federated XGBoost

    Privacy has raised considerable concerns recently, especially with the advent of the information explosion and the numerous data mining techniques used to explore the information inside large volumes of data. In this context, a distributed learning paradigm termed federated learning has recently become prominent as a way to tackle privacy issues in distributed learning: only learning models are transmitted from the distributed nodes to servers, without revealing users' own data, hence protecting users' privacy. In this paper, we propose a horizontal federated XGBoost algorithm for the federated anomaly detection problem, where anomaly detection aims to identify abnormalities in extremely unbalanced datasets and can be considered a special classification problem. Our proposed federated XGBoost algorithm incorporates data aggregation and sparse federated update processes to balance the tradeoff between privacy and learning performance. In particular, we introduce the virtual data sample by aggregating a group of users' data together at a single distributed node. We compute parameters based on these virtual data samples at the local nodes and aggregate the learning model at the central server. In the model update process, we focus on the data in the virtual samples that were previously classified incorrectly, which yields sparse learning model parameters. By carefully controlling the size of these groups of samples, we can trade off privacy against learning performance, as sketched below. Our experimental results show the effectiveness of the proposed scheme in comparison with existing state-of-the-art methods.
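    The virtual-data-sample idea can be sketched roughly as follows: each node groups several users into one aggregated sample, computes first- and second-order boosting statistics on those virtual samples, and ships only the statistics to the server. The group size, squared-error objective, mean-based aggregation rule, and single-leaf boosting step below are illustrative assumptions, not the paper's exact algorithm.

```python
# Rough sketch: virtual samples per node, aggregated boosting statistics at the server.
import numpy as np

rng = np.random.default_rng(2)

def make_virtual_samples(X, y, group_size):
    """Aggregate groups of users into virtual samples. Larger groups hide individual
    records better but give coarser statistics, mirroring the privacy/accuracy tradeoff."""
    n_groups = len(y) // group_size
    Xv = X[: n_groups * group_size].reshape(n_groups, group_size, -1).mean(axis=1)
    yv = y[: n_groups * group_size].reshape(n_groups, group_size).mean(axis=1)
    return Xv, yv

def local_grad_hess(Xv, yv, predict):
    """Per-node gradient/Hessian sums for a squared-error boosting step."""
    pred = predict(Xv)
    grad = pred - yv          # first-order statistics
    hess = np.ones_like(yv)   # second-order statistics (constant for squared error)
    return grad.sum(), hess.sum()

def predict(Xv):
    # Initial model: predicts 0 for every virtual sample.
    return np.zeros(len(Xv))

# Two local nodes with private user data; only virtual-sample statistics leave them.
nodes = []
for _ in range(2):
    X = rng.normal(size=(60, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    nodes.append(make_virtual_samples(X, y, group_size=5))

# The server aggregates only the summed statistics, never raw user records,
# and uses them to compute the next leaf weight (single-leaf "tree" here).
stats = [local_grad_hess(Xv, yv, predict) for Xv, yv in nodes]
G = sum(g for g, _ in stats)
H = sum(h for _, h in stats)
leaf_weight = -G / (H + 1.0)   # XGBoost-style leaf value with regularization lambda = 1
print("first boosting step leaf weight:", round(leaf_weight, 3))
```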