
    RaSEC : an intelligent framework for reliable and secure multilevel edge computing in industrial environments

    Industrial applications generate big data containing redundant information that is transmitted over heterogeneous networks. Transmitting big data with redundant information not only increases the overall end-to-end delay but also increases the computational load on servers, which degrades the performance of industrial applications. To address these challenges, we propose an intelligent framework named Reliable and Secure multi-level Edge Computing (RaSEC), which operates in three phases. In the first phase, level-one edge devices apply a lightweight aggregation technique to the generated data. This technique not only reduces the size of the generated data but also helps preserve the privacy of data sources. In the second phase, a multistep process is used to register level-two edge devices (LTEDs) with high-level edge devices (HLEDs). Because of this registration process, only legitimate LTEDs can forward data to the HLEDs, and as a result the computational load on the HLEDs decreases. In the third phase, the HLEDs use a convolutional neural network to detect the presence of moving objects in the data forwarded by the LTEDs. If movement is detected, the data is uploaded to the cloud servers for further analysis; otherwise, the data is discarded to minimize the use of computational resources on cloud computing platforms. The proposed framework reduces the response time by forwarding only useful information to the cloud servers and can be utilized by various industrial applications. Our theoretical and experimental results confirm the resiliency of our framework with respect to security and privacy threats.
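    A minimal sketch of the three-phase flow described in the abstract is shown below. All names (Reading, HLED, aggregate, detect_motion) are illustrative assumptions, the registration step is collapsed into a single shared-secret exchange, and a simple frame-difference threshold stands in for the paper's CNN detector; this is not the authors' implementation.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    frames: list  # aggregated sensor frames (e.g. grayscale intensities)

def aggregate(frames):
    """Phase 1 (sketch): lightweight aggregation on a level-one edge device.
    Consecutive frames are averaged to shrink the payload; the abstract does not
    specify the actual aggregation technique."""
    return [sum(pair) / 2 for pair in zip(frames[::2], frames[1::2])]

class HLED:
    """High-level edge device: keeps a registry of legitimate LTEDs (Phase 2)
    and runs movement detection before forwarding data to the cloud (Phase 3)."""
    def __init__(self):
        self.registry = {}  # device_id -> shared-secret digest

    def register(self, device_id, secret):
        # The paper's multistep registration is abstracted to one secret exchange.
        self.registry[device_id] = hashlib.sha256(secret.encode()).hexdigest()

    def accept(self, reading, secret):
        digest = hashlib.sha256(secret.encode()).hexdigest()
        if self.registry.get(reading.device_id) != digest:
            return "rejected: unregistered LTED"
        if self.detect_motion(reading.frames):
            return "uploaded to cloud"
        return "discarded: no movement"

    @staticmethod
    def detect_motion(frames, threshold=5.0):
        # Stand-in for the CNN detector: flag motion when consecutive
        # aggregated frames differ by more than a threshold.
        return any(abs(a - b) > threshold for a, b in zip(frames, frames[1:]))

# Usage: an LTED aggregates its data and forwards it to a registered HLED.
hled = HLED()
hled.register("lted-01", "shared-secret")
aggregated = aggregate([10, 12, 11, 30, 90, 95])
print(hled.accept(Reading("lted-01", aggregated), "shared-secret"))
```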

    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust and is further broken down into different notions. Our survey covers trustworthiness challenges at every level of FL settings. We present a comprehensive architecture of Trustworthy FL, addressing the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.
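    A hypothetical illustration of the "computing trustworthiness levels" idea mentioned in the abstract: combining per-pillar scores (Interpretability, Fairness, Security & Privacy) into one trust level via a weighted average. The survey does not prescribe this formula; the weights and scores here are assumptions.

```python
def trust_level(scores, weights=None):
    """Weighted average of per-pillar trust scores, each in [0, 1]."""
    pillars = ["interpretability", "fairness", "security_privacy"]
    weights = weights or {p: 1 / len(pillars) for p in pillars}  # equal weighting by default
    return sum(scores[p] * weights[p] for p in pillars)

# Example with assumed per-pillar scores.
print(trust_level({"interpretability": 0.7, "fairness": 0.8, "security_privacy": 0.6}))
# -> 0.7 under equal weighting
```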