5 research outputs found

    Enhancing Privacy-Preserving Intrusion Detection in Blockchain-Based Networks with Deep Learning

    Data transfer in sensitive industries such as healthcare poses significant privacy challenges, making it difficult to collaborate and apply machine learning effectively. This study explores these issues by examining how hybrid learning approaches can be used to move models between users and consumers as well as within organizations. Blockchain technology, with token-based compensation for participants, provides privacy-preserving data collection and secure model transfer. The proposed approach combines Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks to create a privacy-preserving framework for predictive analytics, using LSTM-GRU-based federated learning for local model training. Blockchain is used to securely transmit data to a distributed, decentralised cloud server, guaranteeing confidentiality and privacy through a variety of storage techniques. By combining hybrid learning, federated learning, and blockchain, this architecture addresses privacy concerns and enables seamless cooperation, helping to bridge the gap between secure data transfer and effective deep learning in sensitive domains. Experimental results demonstrate an accuracy of 99.01%.
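    The abstract gives no implementation details, so the following is only a rough sketch of what an LSTM-GRU hybrid local model with FedAvg-style server aggregation might look like; the framework choice (PyTorch), layer sizes, and function names are assumptions, and the blockchain transport and token layer are omitted entirely.

```python
# A minimal sketch (not the authors' code) of a hybrid LSTM-GRU local model
# and a FedAvg-style aggregation step; all sizes and names are illustrative.
import copy
import torch
import torch.nn as nn

class LSTMGRUClassifier(nn.Module):
    """Stacked LSTM -> GRU encoder with a linear head (hypothetical sizes)."""
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)          # LSTM encodes the raw sequence
        h, _ = self.gru(h)           # GRU refines the LSTM features
        return self.head(h[:, -1])   # classify from the last time step

def fedavg(client_states):
    """Average client model weights (the server-side FedAvg step)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(0)
    return avg
```

    In a federated setup, each client would train a local LSTMGRUClassifier on its own data and ship only the resulting state dict for aggregation; in the paper's architecture, that transfer would go over the blockchain layer rather than a plain channel.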

    Feature Inference Attack on Model Predictions in Vertical Federated Learning

    Federated learning (FL) is an emerging paradigm that enables multiple organizations to collaborate on data without revealing their private data to each other. Recently, vertical FL, where the participating organizations hold the same set of samples but disjoint features and only one organization owns the labels, has received increased attention. This paper presents several feature inference attack methods to investigate the potential privacy leakage in the model prediction stage of vertical FL. The attacks consider the most stringent setting, in which the adversary controls only the trained vertical FL model and the model predictions, relying on no background information. We first propose two specific attacks on logistic regression (LR) and decision tree (DT) models based on individual prediction outputs. We then design a general attack method, based on multiple prediction outputs accumulated by the adversary, to handle complex models such as neural network (NN) and random forest (RF) models. Experimental evaluations demonstrate the effectiveness of the proposed attacks and highlight the need for private mechanisms that protect prediction outputs in vertical FL.
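    To make the LR-attack idea concrete: a logistic regression prediction is a sigmoid of a linear score, so an adversary who holds the trained weights and its own feature values can invert an individual prediction and solve for the passive party's contribution. The sketch below is a simplified, hypothetical single-unknown-feature illustration of that inversion, not the paper's actual method; all names are assumptions.

```python
# A minimal sketch of feature inference from an individual LR prediction in
# vertical FL, assuming the passive party holds exactly one unknown feature.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infer_passive_feature(pred, w_adv, x_adv, w_passive, b):
    """Invert one LR prediction to recover the passive party's feature.

    pred      : predicted probability for one sample
    w_adv     : weights on the adversary's own features
    x_adv     : the adversary's feature values for this sample
    w_passive : (scalar) weight on the single unknown passive feature
    b         : bias term
    """
    logit = np.log(pred / (1.0 - pred))          # invert the sigmoid
    residual = logit - np.dot(w_adv, x_adv) - b  # passive party's share of the score
    return residual / w_passive                  # solve w_passive * x_passive = residual

# Toy check: the inversion recovers the hidden feature exactly.
w_adv, x_adv = np.array([0.5, -1.2]), np.array([1.0, 2.0])
w_p, b, x_hidden = 0.8, 0.1, 3.0
pred = sigmoid(np.dot(w_adv, x_adv) + w_p * x_hidden + b)
print(infer_passive_feature(pred, w_adv, x_adv, w_p, b))  # -> 3.0
```

    With more unknown features than equations, a single prediction no longer pins down a unique solution, which is why the paper's general attack accumulates multiple prediction outputs to constrain the unknowns for complex models such as NN and RF.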