EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition
Human activity recognition from multiple sensors has been a challenging yet
promising task over recent decades. In this paper, we propose a deep multimodal
fusion model for activity recognition based on the recently proposed feature
fusion architecture named EmbraceNet. Our model processes each sensor data
independently, combines the features with the EmbraceNet architecture, and
post-processes the fused feature to predict the activity. In addition, we
propose additional processes to boost the performance of our model. We submit
the results obtained from our proposed model to the SHL recognition challenge
with the team name "Yonsei-MCML."
Comment: Accepted in HASCA at ACM UbiComp/ISWC 2019; won 2nd place in the SHL Recognition Challenge 201
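The central fusion step of the EmbraceNet architecture can be sketched as follows. This is a simplified, hypothetical illustration: for each element of the fused vector, one modality is stochastically chosen to supply that element. The published model additionally learns "docking" layers that map each modality to a common dimension and is trained end-to-end; the function and variable names here are illustrative, not the authors' code.

```python
import random

def embrace(modality_features, probabilities, seed=0):
    """Fuse K equal-length feature vectors (one per modality) by drawing,
    for each output position, which modality supplies that element."""
    rng = random.Random(seed)
    k = len(modality_features)
    d = len(modality_features[0])
    fused = []
    for i in range(d):
        # sample a modality index for position i (the "embracement" step)
        m = rng.choices(range(k), weights=probabilities, k=1)[0]
        fused.append(modality_features[m][i])
    return fused

# two modalities already mapped ("docked") to the same 4-dim space
accel = [0.1, 0.2, 0.3, 0.4]
gyro = [1.0, 2.0, 3.0, 4.0]
fused = embrace([accel, gyro], [0.5, 0.5])
```

Because each fused element comes from exactly one modality, the model remains usable when a sensor drops out: its selection probability can simply be set to zero at inference time.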
Efficient Personalized Learning for Wearable Health Applications using HyperDimensional Computing
Health monitoring applications increasingly rely on machine learning
techniques to learn end-user physiological and behavioral patterns in everyday
settings. Considering the significant role of wearable devices in monitoring
human body parameters, on-device learning can be utilized to build personalized
models for behavioral and physiological patterns, and provide data privacy for
users at the same time. However, resource constraints on most of these wearable
devices prevent online learning from being performed on them. To address this
issue, the machine learning models must be rethought from an algorithmic
perspective to make them suitable for wearable devices.
Hyperdimensional computing (HDC) offers a well-suited on-device learning
solution for resource-constrained devices and provides support for
privacy-preserving personalization. Our HDC-based method offers flexibility,
high efficiency, resilience, and performance while enabling on-device
personalization and privacy protection. We evaluate the efficacy of our
approach using three case studies and show that our system improves the energy
efficiency of training compared with state-of-the-art Deep Neural Network
(DNN) algorithms while offering comparable accuracy.
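The basic HDC learn-and-classify loop the abstract alludes to can be sketched in a few lines: samples are encoded as high-dimensional bipolar vectors, class prototypes are formed by bundling (elementwise majority), and a query is classified by its nearest prototype. This is a generic HDC sketch, not the paper's implementation; in a real system the sample hypervectors would come from an encoder over sensor features, whereas here they are random placeholders, and the dimensionality is reduced from the ~10,000 typically used.

```python
import random

DIM = 2048  # hypervector dimensionality (often ~10k in practice)
rng = random.Random(42)

def random_hv():
    # random bipolar hypervector standing in for an encoded sample
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bundle(hvs):
    # elementwise majority (sign of the sum) combines hypervectors
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    # normalized dot product (cosine similarity for bipolar vectors)
    return sum(x * y for x, y in zip(a, b)) / DIM

# "train": bundle each class's sample hypervectors into one prototype
samples = {"walk": [random_hv() for _ in range(5)],
           "run": [random_hv() for _ in range(5)]}
prototypes = {label: bundle(hvs) for label, hvs in samples.items()}

# classify a query by nearest prototype
query = samples["walk"][0]
pred = max(prototypes, key=lambda lbl: similarity(query, prototypes[lbl]))
```

Training and personalization here amount to elementwise additions rather than gradient descent, which is why HDC suits resource-constrained on-device learning.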
Stratified Transfer Learning for Cross-domain Activity Recognition
In activity recognition, it is often expensive and time-consuming to acquire
sufficient activity labels. To solve this problem, transfer learning leverages
the labeled samples from the source domain to annotate the target domain which
has few or no labels. Existing approaches typically consider learning a
global domain shift while ignoring the intra-affinity between classes, which
will hinder the performance of the algorithms. In this paper, we propose a
novel and general cross-domain learning framework that can exploit the
intra-affinity of classes to perform intra-class knowledge transfer. The
proposed framework, referred to as Stratified Transfer Learning (STL), can
dramatically improve the classification accuracy for cross-domain activity
recognition. Specifically, STL first obtains pseudo labels for the target
domain via a majority voting technique. Then, it performs intra-class knowledge
transfer iteratively to transform both domains into the same subspaces.
Finally, the labels of the target domain are obtained via a second annotation. To
evaluate the performance of STL, we conduct comprehensive experiments on three
large public activity recognition datasets (i.e., OPPORTUNITY, PAMAP2, and UCI
DSADS), which demonstrate that STL significantly outperforms other
state-of-the-art methods w.r.t. classification accuracy (an improvement of 7.68%).
Furthermore, we extensively investigate the performance of STL across different
degrees of similarity and activity levels between domains. We also
discuss the potential of STL in other pervasive computing applications to
provide empirical experience for future research.Comment: 10 pages; accepted by IEEE PerCom 2018; full paper. (camera-ready
version
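STL's first step, obtaining pseudo labels for the unlabeled target domain by majority voting, can be sketched as below. The abstract does not specify which base classifiers vote, so the classifier names and label sets here are hypothetical; only the voting mechanism itself is illustrated.

```python
from collections import Counter

def majority_vote_pseudo_labels(predictions_per_classifier):
    """Combine predictions from several source-trained classifiers on
    unlabeled target-domain samples into pseudo labels by majority vote.

    predictions_per_classifier: list of prediction lists, one per
    classifier, each of length n_samples."""
    pseudo = []
    for votes in zip(*predictions_per_classifier):
        label, _count = Counter(votes).most_common(1)[0]
        pseudo.append(label)
    return pseudo

# three hypothetical source-trained classifiers voting on four target samples
clf_a = ["walk", "run", "sit", "walk"]
clf_b = ["walk", "run", "run", "walk"]
clf_c = ["run", "run", "sit", "walk"]
print(majority_vote_pseudo_labels([clf_a, clf_b, clf_c]))
# → ['walk', 'run', 'sit', 'walk']
```

These pseudo labels would then seed the iterative intra-class transfer step, being refined in the second annotation pass described in the abstract.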