
    Secured federated learning model verification: A client-side backdoor triggered watermarking scheme

    Federated learning (FL) has become an emerging distributed framework for building deep learning models through the collaborative efforts of multiple participants. Consequently, copyright protection of FL deep models is urgently required because many participants have access to the jointly trained model. Recently, encrypted FL frameworks have been developed to address the data-leakage issue when the central node is not fully trusted. This encryption process makes it impossible for existing DL model watermarking schemes to embed a watermark at the central node. In this paper, we propose a novel client-side federated learning watermarking method to tackle the model verification issue under the encrypted FL framework. Specifically, we design a backdoor-based watermarking scheme that allows model owners to embed their pre-designed noise patterns into the FL deep model. Thus, our method provides reliable copyright protection while preserving data privacy, because the central node has no access to the encrypted gradient information. The experimental results demonstrate the efficiency of our method in terms of both FL model performance and watermarking robustness.
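
    To illustrate the general idea of backdoor-based watermarking on the client side, the following minimal sketch (not the authors' implementation) shows how a model owner might generate a secret set of noise-pattern triggers mapped to a fixed target label, mix them into local training data before a federated round, and later verify ownership by checking the model's accuracy on the trigger set. The function names, image shape, and threshold here are assumptions made for the example.

```python
# Illustrative sketch of client-side backdoor-style watermarking in FL.
# Assumed names (make_trigger_set, verify_watermark), the 3x32x32 input
# shape, and the 0.9 verification threshold are NOT from the paper.
import torch
from torch.utils.data import TensorDataset, ConcatDataset


def make_trigger_set(num_triggers: int, image_shape=(3, 32, 32),
                     target_label: int = 0, seed: int = 42) -> TensorDataset:
    """Generate owner-specific noise patterns, all paired with one target label."""
    gen = torch.Generator().manual_seed(seed)          # owner-secret seed
    triggers = torch.rand(num_triggers, *image_shape, generator=gen)
    labels = torch.full((num_triggers,), target_label, dtype=torch.long)
    return TensorDataset(triggers, labels)


def watermarked_client_dataset(local_dataset, trigger_set):
    """Append the trigger set to a client's local data before local training."""
    return ConcatDataset([local_dataset, trigger_set])


def verify_watermark(model, trigger_set, threshold: float = 0.9) -> bool:
    """Claim ownership if the model labels most triggers with the target label."""
    model.eval()
    inputs = torch.stack([x for x, _ in trigger_set])
    targets = torch.stack([y for _, y in trigger_set])
    with torch.no_grad():
        preds = model(inputs).argmax(dim=1)
    return (preds == targets).float().mean().item() >= threshold
```

    Because the trigger patterns are mixed in locally, the watermark is embedded before gradients are encrypted and aggregated, so the central node never needs access to plaintext updates.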