Attention-Based Deep Learning Framework for Human Activity Recognition with User Adaptation
Sensor-based human activity recognition (HAR) requires predicting the actions
of a person from sensor-generated time series data. HAR has attracted major
interest in the past few years, thanks to the large number of applications
enabled by modern ubiquitous computing devices. While several techniques based
on hand-crafted feature engineering have been proposed, the current
state-of-the-art is represented by deep learning architectures that
automatically obtain high level representations and that use recurrent neural
networks (RNNs) to extract temporal dependencies in the input. RNNs have
several limitations, in particular in dealing with long-term dependencies. We
propose a novel deep learning framework, \algname, based on a purely
attention-based mechanism, that overcomes the limitations of the
state-of-the-art. We show that our proposed attention-based architecture is
considerably more powerful than previous approaches, with an average increment
of more than on the F1 score over the previous best-performing model.
Furthermore, we consider the problem of personalizing HAR deep learning models,
which is of great importance in several applications. We propose a simple and
effective transfer-learning based strategy to adapt a model to a specific user,
providing an average increment of on the F1 score on the predictions for
that user. Our extensive experimental evaluation proves the significantly
superior capabilities of our proposed framework over the current
state-of-the-art and the effectiveness of our user adaptation technique.
Comment: Accepted for publication in the IEEE Sensors Journal
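The abstract above describes replacing RNNs with a purely attention-based mechanism, but the \algname architecture itself is not detailed here. As an illustration only, the following is a minimal NumPy sketch of single-head scaled dot-product self-attention over a single sensor window; all shapes, the random weights, and the channel counts are hypothetical, not the paper's design:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a sensor window.

    x: (T, d) window of T time steps with d sensor channels/features.
    w_q, w_k, w_v: (d, d_k) projection matrices (learned in a real model).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # each (T, d_k)
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (T, T) pairwise step affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time steps
    return weights @ v                             # (T, d_k) context-mixed features

rng = np.random.default_rng(0)
T, d, d_k = 128, 6, 16            # e.g. 128 samples of 3-axis accel + 3-axis gyro
x = rng.standard_normal((T, d))
out = self_attention(x,
                     rng.standard_normal((d, d_k)),
                     rng.standard_normal((d, d_k)),
                     rng.standard_normal((d, d_k)))
print(out.shape)  # (128, 16)
```

Unlike an RNN, every output step here attends directly to every input step, which is why attention avoids the long-term dependency bottleneck the abstract mentions.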
Deep Sensing: Inertial and Ambient Sensing for Activity Context Recognition using Deep Convolutional Neural Networks
With the widespread use of embedded sensing capabilities of mobile devices, there has
been unprecedented development of context-aware solutions. This allows the proliferation of
various intelligent applications, such as those for remote health and lifestyle monitoring, intelligent
personalized services, etc. However, activity context recognition based on multivariate time series
signals obtained from mobile devices in unconstrained conditions is naturally prone to class
imbalance problems. This means that recognition models tend to predict classes with the majority number
of samples whilst ignoring classes with the least number of samples, resulting in poor
generalization. To address this problem, we propose augmentation of the time series signals from
inertial sensors with signals from ambient sensing to train deep convolutional neural network
(DCNN) models. DCNNs provide the characteristics that capture local dependency and scale
invariance of these combined sensor signals. Consequently, we developed a DCNN model using
only inertial sensor signals and then developed another model that combined signals from both
inertial and ambient sensors aiming to investigate the class imbalance problem by improving the
performance of the recognition model. Evaluation and analysis of the proposed system using data
with imbalanced classes show that the system achieved better recognition accuracy when data from
inertial sensors are combined with those from ambient sensors, such as environmental noise level
and illumination, with an overall improvement of 5.3% in accuracy.
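The fusion step described above, augmenting inertial channels with ambient ones before feeding a DCNN, can be sketched as follows. The channel names, window length, and per-channel z-score normalisation are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

# Hypothetical windows: 3-axis accelerometer (inertial) plus environmental
# noise level and illumination (ambient), resampled to a common rate.
rng = np.random.default_rng(1)
T = 200                                # samples per window (assumed)
accel = rng.standard_normal((T, 3))    # inertial channels
noise = rng.standard_normal((T, 1))    # ambient: environmental noise level
lux   = rng.standard_normal((T, 1))    # ambient: illumination

def zscore(a):
    # Per-channel normalisation before fusion, since sensor units differ.
    return (a - a.mean(axis=0)) / (a.std(axis=0) + 1e-8)

# Stack all channels into one multichannel window for a 1-D DCNN input.
window = np.concatenate([zscore(accel), zscore(noise), zscore(lux)], axis=1)
print(window.shape)  # (200, 5)
```

The ambient channels add context that helps the network separate minority activity classes the inertial signals alone confuse, which is the mechanism behind the reported accuracy gain.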
A Light Weight Smartphone Based Human Activity Recognition System with High Accuracy
With the pervasive use of smartphones, which contain numerous sensors, data for modeling human activity is readily available. Human activity recognition is an important area of research because it can be used in context-aware applications. It has significant influence in many other research areas and applications, including healthcare, assisted living, personal fitness, and entertainment. Machine learning techniques have been widely used in wearable and smartphone-based human activity recognition. Despite being an active area of research for more than a decade, most existing approaches require extensive computation to extract features, train models, and recognize activities. This study presents a computationally efficient smartphone-based human activity recognizer based on dynamical systems and chaos theory. A reconstructed phase space is formed from the accelerometer sensor data using time-delay embedding. A single accelerometer axis is used to reduce memory and computational complexity. A Gaussian mixture model is learned on the reconstructed phase space. A maximum likelihood classifier uses the Gaussian mixture model to classify ten different human activities and a baseline. One public and one collected dataset were used to validate the proposed approach. Data was collected from ten subjects; the public dataset contains data from 30 subjects. Out-of-sample experimental results show that the proposed approach is able to recognize human activities from a smartphone's one-axis raw accelerometer data. The proposed approach achieved 100% accuracy for individual models across all activities and datasets. It requires 3 to 7 times less data than existing approaches to classify activities, and 3 to 4 times less time to build the reconstructed phase space compared to time- and frequency-domain features.
A comparative evaluation is also presented, comparing the proposed approach with state-of-the-art works.
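The reconstructed phase space described above can be illustrated with a short time-delay embedding sketch. The `dim` and `tau` values below are illustrative choices (in practice they would be selected, e.g., by false-nearest-neighbour or mutual-information analysis), and the paper's GMM/maximum-likelihood stage is only noted in a comment:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a single accelerometer axis by
    time-delay embedding: each row is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy one-axis "accelerometer" trace standing in for real sensor data.
t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * t)

points = delay_embed(signal, dim=3, tau=5)
print(points.shape)  # (490, 3) points in the reconstructed phase space
# In the paper's pipeline, a Gaussian mixture model would be fit to these
# points per activity, and a new window classified by maximum likelihood.
```

Because the embedding is just index slicing on one axis, it is far cheaper than computing time- and frequency-domain feature banks, which matches the efficiency claim in the abstract.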
Game Theory Solutions in Sensor-Based Human Activity Recognition: A Review
Human Activity Recognition (HAR) tasks automatically identify human
activities using sensor data, which has numerous applications in
healthcare, sports, security, and human-computer interaction. Despite
significant advances in HAR, critical challenges still exist. Game theory has
emerged as a promising solution to address these challenges in machine learning
problems, including HAR. However, there is a lack of research on applying
game theory solutions to HAR problems. This review paper explores the
potential of game theory as a solution for HAR tasks, and bridges the gap
between game theory and HAR research work by suggesting novel game-theoretic
approaches for HAR problems. The contributions of this work include exploring
how game theory can improve the accuracy and robustness of HAR models,
investigating how game-theoretic concepts can optimize recognition algorithms,
and discussing game-theoretic approaches against existing HAR methods.
The objective is to provide insights into the potential of game theory as a
solution for sensor-based HAR, and to contribute to developing more accurate
and efficient recognition systems in future research.
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities
The vast proliferation of sensor devices and Internet of Things enables the
applications of sensor-based activity recognition. However, there exist
substantial challenges that could influence the performance of the recognition
system in practical scenarios. Recently, as deep learning has demonstrated its
effectiveness in many areas, many deep learning methods have been investigated to
address the challenges in activity recognition. In this study, we present a
survey of the state-of-the-art deep learning methods for sensor-based human
activity recognition. We first introduce the multi-modality of the sensory data
and provide information on public datasets that can be used for evaluation in
different challenge tasks. We then propose a new taxonomy to structure the deep
methods by challenges. Challenges and challenge-related deep methods are
summarized and analyzed to form an overview of the current research progress.
At the end of this work, we discuss the open issues and provide some insights
for future directions.