4,530 research outputs found

    A Hybrid Approach to Privacy-Preserving Federated Learning

    Federated learning facilitates the collaborative training of models without the sharing of raw data. However, recent attacks demonstrate that simply maintaining data locality during training does not provide sufficient privacy guarantees. Rather, we need a federated learning system capable of preventing inference over both the messages exchanged during training and the final trained model, while ensuring the resulting model also has acceptable predictive accuracy. Existing federated learning approaches either use secure multiparty computation (SMC), which is vulnerable to inference, or differential privacy, which can lead to low accuracy given a large number of parties with relatively small amounts of data each. In this paper, we present an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs. Combining differential privacy with secure multiparty computation enables us to reduce the growth of noise injection as the number of parties increases, without sacrificing privacy, while maintaining a pre-defined rate of trust. Our system is therefore a scalable approach that protects against inference threats and produces models with high accuracy. Additionally, our system can be used to train a variety of machine learning models, which we validate with experimental results on 3 different machine learning algorithms. Our experiments demonstrate that our approach outperforms state-of-the-art solutions.
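    As a concrete illustration of how the hybrid scheme above can work, the sketch below simulates parties that each add only a fraction of the Gaussian noise required for the differential-privacy guarantee before secret-sharing their updates, so that the full noise level is present only in the securely aggregated sum the server sees. This is a minimal sketch, not the paper's protocol: the t-of-n honest-party scaling rule, the toy additive secret sharing, and all names and parameters are illustrative assumptions.

```python
# Hypothetical simulation of DP + secure aggregation (not the paper's protocol).
import numpy as np

rng = np.random.default_rng(0)
DIM, N_PARTIES, MIN_HONEST = 8, 10, 7   # assume at least 7 of 10 parties are honest
CLIP, SIGMA_TOTAL = 1.0, 0.5            # clipping norm and target total noise scale
SCALE, MODULUS = 10**6, 2**31           # fixed-point encoding for secret sharing

def local_update(gradient):
    """Clip the update, then add only a 1/sqrt(t) share of the total noise,
    so the aggregate of t honest parties carries variance SIGMA_TOTAL**2."""
    g = gradient * min(1.0, CLIP / (np.linalg.norm(gradient) + 1e-12))
    g = g + rng.normal(0.0, SIGMA_TOTAL / np.sqrt(MIN_HONEST), size=g.shape)
    return np.round(g * SCALE).astype(np.int64)  # encode as fixed-point integers

def additive_shares(vec, n):
    """Toy additive secret sharing mod MODULUS (a stand-in for real SMC)."""
    shares = [rng.integers(0, MODULUS, size=vec.shape) for _ in range(n - 1)]
    shares.append((vec - sum(shares)) % MODULUS)
    return shares

# Each party secret-shares its noisy update; the server reconstructs only the sum.
all_shares = [additive_shares(local_update(rng.normal(size=DIM)), N_PARTIES)
              for _ in range(N_PARTIES)]
total = np.zeros(DIM, dtype=np.int64)
for shares in all_shares:
    for s in shares:
        total = (total + s) % MODULUS
total = np.where(total > MODULUS // 2, total - MODULUS, total)  # re-center signed values
print("securely aggregated mean update:", total / (SCALE * N_PARTIES))
```

Because each party contributes noise with variance SIGMA_TOTAL**2 / MIN_HONEST, the aggregate noise stays at the target level rather than growing with the number of parties, which is the trade-off the abstract describes.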

    Enhancing Privacy-Preserving Intrusion Detection in Blockchain-Based Networks with Deep Learning

    Data transfer in sensitive industries such as healthcare presents significant challenges due to privacy issues, making it difficult to collaborate and use machine learning effectively. This study explores these issues by examining how hybrid learning approaches can be used to move models between users and consumers as well as within organizations. Blockchain technology, which compensates participants with tokens, is used to provide privacy-preserving data collection and secure model transfer. The proposed approach combines Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) to create a privacy-preserving secure framework for predictive analytics. LSTM-GRU-based federated learning techniques are used for local model training. The approach uses blockchain to securely transmit data to a distributed, decentralised cloud server, guaranteeing data confidentiality and privacy using a variety of storage techniques. This architecture addresses privacy issues and encourages seamless cooperation by utilising hybrid learning, federated learning, and blockchain technology. The study contributes to bridging the gap between secure data transfer and effective deep learning, specifically within sensitive domains. Experimental results demonstrate an accuracy of 99.01%.
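    The abstract does not give architectural details, but a hybrid LSTM-GRU classifier of the kind it describes could look like the PyTorch sketch below; the layer sizes, the 41-dimensional flow features, and the binary intrusion/normal output are assumptions, and the blockchain layer and federated aggregation loop are not reproduced.

```python
# Minimal sketch of a hybrid LSTM-GRU intrusion classifier (assumed architecture).
import torch
import torch.nn as nn

class LSTMGRUClassifier(nn.Module):
    def __init__(self, n_features=41, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, features)
        h, _ = self.lstm(x)            # LSTM encodes the raw traffic sequence
        h, _ = self.gru(h)             # GRU refines the LSTM hidden states
        return self.head(h[:, -1])     # classify from the final time step

# One local training step, as each federated client might run before its
# weights are shipped (e.g., over the blockchain layer) for aggregation.
model = LSTMGRUClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 20, 41)           # dummy batch: 32 flows, 20 time steps
y = torch.randint(0, 2, (32,))        # dummy intrusion/normal labels
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```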

    FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection

    Lack of generalization to unseen domains/attacks is the Achilles heel of most face presentation attack detection (FacePAD) algorithms. Existing attempts to enhance the generalizability of FacePAD solutions assume that data from multiple source domains are available with a single entity to enable centralized training. In practice, data from different source domains may be collected by diverse entities, who are often unable to share their data due to legal and privacy constraints. While collaborative learning paradigms such as federated learning (FL) can overcome this problem, standard FL methods are ill-suited for domain generalization because they struggle to surmount the twin challenges of handling non-IID client data distributions during training and generalizing to unseen domains during inference. In this work, a novel framework called Federated Split learning with Intermediate representation Sampling (FedSIS) is introduced for privacy-preserving domain generalization. In FedSIS, a hybrid Vision Transformer (ViT) architecture is learned using a combination of FL and split learning to achieve robustness against statistical heterogeneity in the client data distributions without any sharing of raw data (thereby preserving privacy). To further improve generalization to unseen domains, a novel feature augmentation strategy called intermediate representation sampling is employed, and discriminative information from intermediate blocks of a ViT is distilled using a shared adapter network. The FedSIS approach has been evaluated on two well-known benchmarks for cross-domain FacePAD to demonstrate that it is possible to achieve state-of-the-art generalization performance without data sharing. Code: https://github.com/Naiftt/FedSIS. Comment: Accepted to the IEEE International Joint Conference on Biometrics (IJCB), 2023.
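    One way to read the intermediate representation sampling idea is sketched below: at each training step a random depth in the shared ViT is sampled, and the tokens produced at that depth are pooled and passed through a small shared adapter before classification. This is an illustrative interpretation under stated assumptions, not the authors' code; the block choice, adapter design, and split-learning partition are simplified (see https://github.com/Naiftt/FedSIS for the actual implementation).

```python
# Illustrative sketch of sampling an intermediate ViT block's representation.
import random
import torch
import torch.nn as nn

class SampledViTHead(nn.Module):
    def __init__(self, vit_blocks, embed_dim=768, n_classes=2):
        super().__init__()
        self.blocks = vit_blocks                       # shared ViT blocks
        self.adapter = nn.Sequential(                  # shared adapter network
            nn.Linear(embed_dim, 256), nn.GELU(), nn.Linear(256, embed_dim))
        self.head = nn.Linear(embed_dim, n_classes)    # bona fide vs. attack

    def forward(self, tokens):                         # tokens: (batch, seq, dim)
        depth = random.randrange(1, len(self.blocks) + 1)  # sample a depth
        for blk in self.blocks[:depth]:                # run the first `depth` blocks
            tokens = blk(tokens)
        feat = self.adapter(tokens.mean(dim=1))        # adapt the pooled tokens
        return self.head(feat)

# Toy stand-in for pretrained ViT blocks so the sketch runs end to end.
blocks = nn.ModuleList([
    nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True)
    for _ in range(12)])
model = SampledViTHead(blocks)
logits = model(torch.randn(4, 197, 768))               # 4 images, 197 patch tokens
```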