
    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust, further broken down into different notions. Our survey covers trustworthiness challenges at every level in FL settings. We present a comprehensive architecture of Trustworthy FL, addressing the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.
    Comment: 45 pages, 8 figures, 9 tables

    How does Federated Learning Impact Decision-Making in Firms: A Systematic Literature Review


    Digital Ethics in Federated Learning

    The Internet of Things (IoT) consistently generates vast amounts of data, sparking increasing concern over the protection of data privacy and the limitation of data misuse. Federated learning (FL) facilitates collaborative capabilities among multiple parties by sharing machine learning (ML) model parameters instead of raw user data, and it has recently gained significant attention for its potential in privacy preservation and learning efficiency enhancement. In this paper, we highlight the digital ethics concerns that arise when human-centric devices serve as clients in FL. More specifically, challenges of game dynamics, fairness, incentive, and continuity arise in FL due to differences in perspectives and objectives between clients and the server. We analyze these challenges and their solutions from the perspectives of both the client and the server, and through the viewpoints of centralized and decentralized FL. Finally, we explore the opportunities in FL for human-centric IoT as directions for future development.
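    The parameter-sharing mechanism this abstract describes can be sketched as a minimal federated averaging loop. This is an illustrative sketch, not the paper's actual method: the linear-regression local model, the learning rate, the round count, and the size-weighted averaging rule are all assumptions made here for concreteness.

    ```python
    import numpy as np

    def local_update(weights, data, lr=0.1):
        """One gradient-descent step on a client's private data.
        A linear-regression loss stands in for any local model."""
        X, y = data
        grad = X.T @ (X @ weights - y) / len(y)
        return weights - lr * grad

    def fed_avg(client_datasets, rounds=50, dim=3):
        """Federated averaging: clients send updated parameters,
        never raw data; the server averages the client models,
        weighting each client by its dataset size."""
        global_w = np.zeros(dim)
        for _ in range(rounds):
            client_ws = [local_update(global_w, d) for d in client_datasets]
            sizes = np.array([len(d[1]) for d in client_datasets])
            global_w = np.average(client_ws, axis=0, weights=sizes)
        return global_w
    ```

    Only `global_w` and the per-client parameter vectors cross the network in this loop; the `(X, y)` pairs stay on their clients, which is the privacy property the abstract highlights.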

    Game Theory Based Privacy Protection for Context-Aware Services

    In the era of context-aware services, users enjoy remarkable services based on data collected from a multitude of users. To receive these services, they risk leaking private information to adversaries who may eavesdrop on the data and/or to an untrusted service platform that sells off its data. Malicious adversaries may use leaked information to violate users' privacy in unpredictable ways. To protect users' privacy, many algorithms shield users' sensitive information by adding noise, which degrades context-aware service quality. Game theory has been utilized as a powerful tool to balance the tradeoff between privacy protection level and service quality. However, most existing schemes fail to depict the mutual relationship between any two of the parties involved: user, platform, and adversary. They also overlook the interaction between multiple users, as well as the interaction between any two attributes. To solve these issues, this dissertation first proposes a three-party game framework to formulate the mutual interaction between the three parties and study the optimal privacy protection level for context-aware services, thus optimizing service quality. Next, it extends the framework to a multi-user scenario and proposes a two-layer, three-party game framework, making the model more realistic by capturing the interaction not only between different parties but also between users. Finally, we analyze the impact of long-term time-serial data and the active actions of the platform and adversary. To this end, we design a three-party Stackelberg game model that helps the user decide whether to update information and the granularity of the updated information.