3 research outputs found
Federated Learning with Heterogeneous Labels and Models for Mobile Activity Monitoring
Various health-care applications, such as assisted living and fall detection,
require modeling of user behavior through Human Activity Recognition
(HAR). Such applications demand characterization of insights from multiple
resource-constrained user devices using machine learning techniques for
effective personalized activity monitoring. On-device Federated Learning proves
to be an effective approach for distributed and collaborative machine learning.
However, there are a variety of challenges in addressing statistical (non-IID
data) and model heterogeneities across users. In addition, in this paper, we
explore a new challenge of interest -- to handle heterogeneities in labels
(activities) across users during federated learning. To this end, we propose a
framework for federated label-based aggregation, which leverages overlapping
information gain across activities using Model Distillation Update. We also
propose that federated transfer of model scores is sufficient rather than model
weight transfer from device to server. Empirical evaluation with the
Heterogeneity Human Activity Recognition (HHAR) dataset (with four activities
for effective elucidation of results) on Raspberry Pi 2 indicates an average
deterministic accuracy increase of at least ~11.01%, thus demonstrating the
on-device capabilities of our proposed framework.
Comment: 8 pages, 5 figures, Machine Learning for Mobile Health Workshop at
NeurIPS 2020. arXiv admin note: substantial text overlap with arXiv:2011.0320
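The score-transfer idea in the abstract above can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's exact procedure: a hypothetical `aggregate_scores` helper averages per-label softmax scores (rather than weights) sent by clients, skipping labels a client has never observed, to form soft targets a global model could later be distilled from.

```python
import numpy as np

def aggregate_scores(client_scores, client_labels, num_labels):
    """Average per-label softmax scores across clients (hypothetical sketch).

    client_scores: list of (n_ref, num_labels) arrays of softmax outputs,
                   computed by each client on a shared reference set.
    client_labels: list of sets of label indices each client has observed.
    Returns an (n_ref, num_labels) array of aggregated soft targets.
    """
    n_ref = client_scores[0].shape[0]
    agg = np.zeros((n_ref, num_labels))
    counts = np.zeros(num_labels)
    for scores, labels in zip(client_scores, client_labels):
        for lbl in labels:
            # Only clients that have seen this label contribute to it,
            # so label heterogeneity across users is handled per column.
            agg[:, lbl] += scores[:, lbl]
            counts[lbl] += 1
    agg /= np.maximum(counts, 1)           # per-label average
    agg /= agg.sum(axis=1, keepdims=True)  # renormalise rows to distributions
    return agg
```

Transferring such score matrices instead of full model weights is also cheaper on constrained devices like the Raspberry Pi mentioned in the abstract, since the payload scales with the reference set rather than the model size.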
Incremental Real-Time Personalization in Human Activity Recognition Using Domain Adaptive Batch Normalization
Human Activity Recognition (HAR) from devices like smartphone accelerometers
is a fundamental problem in ubiquitous computing. Machine learning based
recognition models often perform poorly when applied to new users that were not
part of the training data. Previous work has addressed this challenge by
personalizing general recognition models to the unique motion pattern of a new
user in a static batch setting, which requires target user data to be available
upfront. The more challenging online setting has received less attention: no
samples from the target user are available in advance; instead, they arrive
sequentially. Additionally, the motion pattern of users may change over time,
so adapting to new information must be traded off against forgetting old
information.
Finally, the target user should not have to do any work to use the recognition
system by, say, labeling any activities. Our work addresses all of these
challenges by proposing an unsupervised online domain adaptation algorithm.
Both classification and personalization happen continuously and incrementally
in real time. Our solution works by aligning the feature distributions of all
subjects, be they sources or the target, in hidden neural network layers. To
this end, we normalize the input of a layer with user-specific mean and
variance statistics. During training, these statistics are computed over
user-specific batches. In the online phase, they are estimated incrementally
for any new target user.
Comment: Updated version of the preprint from 05/2020 after going through
revision. The content (experiments, results, proposed method) has not
changed. The explanations changed. Certain sentences have been
added/removed/rephrased to be clearer. Removed Figure 3. Added Discussion
section. Renamed "Description of Approach" Section. Added a reference to
related work
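The user-specific normalization described above can be sketched with incrementally maintained statistics. This is a minimal illustration in the spirit of domain adaptive batch normalization, not the authors' implementation; the class name and the Welford-style update rule are assumptions.

```python
import numpy as np

class UserAdaptiveNorm:
    """Per-user feature normalisation with incrementally estimated
    mean/variance statistics (illustrative sketch)."""

    def __init__(self, dim, eps=1e-5):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)   # overwritten on the first update
        self.count = 0
        self.eps = eps

    def update(self, x):
        # Welford-style incremental update: after n samples, `mean` and
        # `var` equal the population mean/variance of those samples.
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.var += (delta * (x - self.mean) - self.var) / self.count

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + self.eps)
```

During training, one such statistics object per source subject normalizes that subject's batches; in the online phase, a fresh object for the new target user is updated with every arriving sample, so personalization proceeds without any labels from the target user.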
SensiX: A Platform for Collaborative Machine Learning on the Edge
The emergence of multiple sensory devices on or near a human body is
uncovering new dynamics of extreme edge computing. In this, a powerful and
resource-rich edge device such as a smartphone or a Wi-Fi gateway is
transformed into a personal edge, collaborating with multiple devices to offer
remarkable sensory applications, while harnessing the power of locality,
availability, and proximity. Naturally, this transformation pushes us to
rethink how to construct accurate, robust, and efficient sensory systems at
personal edge. For instance, how do we build a reliable activity tracker with
multiple on-body IMU-equipped devices? While the accuracy of sensing models is
improving, their runtime performance still suffers, especially in these
emerging multi-device, personal edge environments. Two prime caveats that
impact their performance are device and data variabilities, contributed by
several runtime factors, including device availability, data quality, and
device placement. To this end, we present SensiX, a personal edge platform that
stays between sensor data and sensing models, and ensures best-effort inference
under any condition while coping with device and data variabilities without
demanding model engineering. SensiX externalises model execution away from
applications, and comprises two essential functions: a translation operator
for principled mapping of device-to-device data and a quality-aware selection
operator to systematically choose the right execution path as a function of
model accuracy. We report the design and implementation of SensiX and
demonstrate its efficacy in developing motion and audio-based multi-device
sensing systems. Our evaluation shows that SensiX offers a 7-13% increase in
overall accuracy and up to 30% increase across different environment dynamics
at the expense of 3 mW power overhead.
Comment: 14 pages, 13 figures, 2 tables
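The quality-aware selection operator described above could be sketched roughly as follows. The function name, the candidate tuple layout, and the accuracy-maximizing rule are assumptions for illustration, not SensiX's actual API: given per-device availability and an estimated model accuracy under current data quality, pick the best available execution path.

```python
def select_device(candidates):
    """Pick the execution path with the highest estimated accuracy
    (hypothetical sketch of a quality-aware selection step).

    candidates: list of (device_id, available, estimated_accuracy) tuples.
    Returns the chosen device_id, or None if no device is available.
    """
    usable = [(acc, dev) for dev, ok, acc in candidates if ok]
    if not usable:
        return None
    # max over (accuracy, device_id) tuples compares accuracy first
    return max(usable)[1]
```

Such a selection step would run continuously inside the platform, re-evaluating the candidate set as device availability, placement, and data quality change at runtime.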