Mitigating Group Bias in Federated Learning for Heterogeneous Devices
Federated Learning is emerging as a privacy-preserving model training
approach in distributed edge applications. However, most edge deployments are
heterogeneous in nature, i.e., their sensing capabilities and environments vary
across deployments. This edge heterogeneity violates the independent and
identically distributed (IID) assumption on local data across clients and produces
biased global models, i.e., models that contribute to unfair decision-making and
discrimination against a particular community or group. Existing bias
mitigation techniques focus only on the bias generated from label heterogeneity in
non-IID data, without accounting for domain variations due to feature
heterogeneity, and do not address the global group-fairness property.
Our work proposes a group-fair FL framework that minimizes group bias while
preserving privacy and incurring no resource utilization overhead. Our main idea is
to leverage average conditional probabilities to compute cross-domain group
\textit{importance weights} from heterogeneous training data and to
optimize the performance of the worst-performing group using a modified
multiplicative weights update method. Additionally, we propose regularization
techniques to minimize the gap between the worst- and best-performing
groups, while our thresholding mechanism strikes a balance
between bias reduction and group performance degradation. Our evaluation on
human emotion recognition and image classification benchmarks assesses the fair
decision-making of our framework in real-world heterogeneous settings.
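
To make the weighting idea concrete, below is a minimal sketch of a multiplicative-weights-style update over per-group importance weights with a thresholding cap. It is not the paper's implementation: the function name, the exponential update rule, and the parameters eta and max_ratio are illustrative assumptions; the abstract only states that a modified multiplicative weights update and a thresholding mechanism are used.

```python
import numpy as np

def group_importance_weights(group_losses, weights, eta=0.1, max_ratio=3.0):
    """One multiplicative-weights-style update over per-group importance weights.

    group_losses : average loss per group, aggregated across client domains
                   (hypothetical stand-in for the cross-domain statistics).
    weights      : current importance weight per group (sums to 1).
    eta          : step size of the multiplicative update (assumed parameter).
    max_ratio    : thresholding knob; caps how far the largest weight may drift
                   from the smallest, limiting degradation of the best group.
    """
    # Up-weight groups with higher loss, i.e., the worst-performing groups.
    new_w = weights * np.exp(eta * np.asarray(group_losses))
    new_w /= new_w.sum()

    # Thresholding: clip the weight spread, then renormalize.
    floor = new_w.max() / max_ratio
    new_w = np.clip(new_w, floor, None)
    return new_w / new_w.sum()

# Illustrative usage with three groups and made-up per-group losses.
w = np.full(3, 1.0 / 3)                 # start from uniform group weights
losses = np.array([0.42, 0.18, 0.25])   # group 0 performs worst this round
w = group_importance_weights(losses, w) # its weight grows for the next round
```

In this reading, the clipping step is what keeps bias reduction from over-penalizing well-performing groups, which is the balance the thresholding mechanism is described as providing.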