Privacy in Multimodal Federated Human Activity Recognition
Human Activity Recognition (HAR) training data is often privacy-sensitive or
held by non-cooperative entities. Federated Learning (FL) addresses such
concerns by training ML models on edge clients. This work studies the impact of
privacy in federated HAR at a user, environment, and sensor level. We show that
the performance of FL for HAR depends on the assumed privacy level of the FL
system and primarily upon the colocation of data from different sensors. By
avoiding data sharing and assuming privacy at the human or environment level,
as prior work has done, accuracy decreases by 5-7%. However, extending
this to the modality level and strictly separating sensor data between multiple
clients may decrease the accuracy by 19-42%. As this form of privacy is
necessary for the ethical utilisation of passive sensing methods in HAR, we
implement a system where clients mutually train both a general FL model and a
group-level one per modality. Our evaluation shows that this method leads to
only a 7-13% decrease in accuracy, making it possible to build HAR systems with
diverse hardware.

Comment: In 3rd On-Device Intelligence Workshop at MLSys 2023, 8 pages
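The aggregation pattern described above (a general federated model trained by all clients, plus a group-level model per sensor modality) can be sketched as follows. This is not the authors' implementation: the models here are plain weight vectors, the local "training" is a toy step toward the client's data mean, and the client setup is invented purely to illustrate FedAvg-style averaging at two levels.

```python
# Hedged sketch: FedAvg-style aggregation with a general model shared by all
# clients and a separate group-level model per sensor modality, so raw sensor
# data never leaves its client. Weight vectors and data are toy values.

def local_step(weights, data, lr=0.1):
    """One toy local update: nudge each weight toward the client's data mean."""
    target = sum(data) / len(data)
    return [w - lr * (w - target) for w in weights]

def average(models):
    """Coordinate-wise mean of weight vectors (FedAvg aggregation)."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

def federated_round(global_model, clients):
    """One round. clients: list of (modality, local_data) pairs.

    Returns (new_global_model, per_modality_group_models)."""
    updates = []
    for modality, data in clients:
        # Each client trains locally, starting from the general model;
        # only the updated weights (never the data) are sent back.
        updates.append((modality, local_step(list(global_model), data)))
    # General model: average over all clients regardless of modality.
    new_global = average([w for _, w in updates])
    # Group-level models: average only within each modality.
    by_modality = {}
    for modality, w in updates:
        by_modality.setdefault(modality, []).append(w)
    new_groups = {m: average(ws) for m, ws in by_modality.items()}
    return new_global, new_groups

# Hypothetical clients: two with IMU data, one with audio data.
clients = [("imu", [1.0, 1.2]), ("imu", [0.8]), ("audio", [3.0, 3.2])]
global_model, group_models = federated_round([0.0, 0.0], clients)
```

The key design point is that the audio client's update contributes to the general model but never to the IMU group model, which is how the strict modality-level separation of sensor data is preserved while still sharing some statistical strength across all clients.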