FedDGA: Federated Multitask Learning Based on Dynamic Guided Attention

Abstract

The proliferation of privacy-sensitive data has spurred the development of federated learning (FL), an important technology for state-of-the-art machine learning and responsible AI. However, most existing FL methods are constrained in applicability and generalizability by their narrow focus on a single task. This article presents a novel federated multitask learning (FMTL) framework capable of acquiring knowledge across multiple tasks. To address the challenges posed by non-IID data and task imbalance in FMTL, it proposes a federated fusion strategy based on dynamic guided attention (FedDGA), which adaptively fine-tunes local models for multiple tasks with personalized attention. In addition, a dynamic batch weight (DBW) scheme is designed to balance task losses and improve convergence speed. Extensive experiments were conducted across various datasets, tasks, and settings, comparing the proposed method with state-of-the-art methods such as FedAvg, FedProx, and SCAFFOLD. The results show that our method achieves significant performance gains, with up to an 11.1% increase in accuracy over the baselines.
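The abstract names two mechanisms without detail, so the sketch below illustrates one plausible reading of each: an attention-guided layer-wise fusion of local and global parameters, and a dynamic per-batch task weighting. The function names (`dga_fuse`, `dbw_loss`), the sigmoid-of-cosine-similarity attention form, and the inverse-loss weighting rule are all illustrative assumptions; the paper's actual FedDGA and DBW definitions may differ.

```python
import torch
import torch.nn.functional as F

def dga_fuse(local_state, global_state, temperature=1.0):
    """Hypothetical dynamic-guided-attention fusion: blend local and
    global parameters layer by layer, with an attention weight derived
    from their similarity. Layers that agree with the global model lean
    on it; divergent layers keep more of the personalized local weights."""
    fused = {}
    for name, w_local in local_state.items():
        w_global = global_state[name]
        sim = F.cosine_similarity(w_local.flatten().float(),
                                  w_global.flatten().float(), dim=0)
        alpha = torch.sigmoid(sim / temperature)  # attention weight in (0, 1)
        fused[name] = alpha * w_global + (1.0 - alpha) * w_local
    return fused

def dbw_loss(task_losses, eps=1e-8):
    """Hypothetical dynamic batch weight rule: each task's weight is
    inversely proportional to its current (detached) loss scale, so a
    single dominant task cannot swamp the combined multitask objective."""
    detached = [l.detach() for l in task_losses]
    total = sum(detached)
    weights = [total / (len(task_losses) * (d + eps)) for d in detached]
    return sum(w * l for w, l in zip(weights, task_losses))
```

Under these assumptions, each client would minimize `dbw_loss` over its task heads on every batch and apply `dga_fuse` after receiving the server's global model, in place of directly overwriting local weights as in FedAvg.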

This paper was published in Cronfa at Swansea University.
