4 research outputs found
Human activity recognition with self-attention
In this paper, a self-attention-based neural network architecture for human activity recognition is proposed. The dataset used was collected with a smartphone. The contribution of this paper is a multi-layer, multi-head self-attention neural network architecture for human activity recognition, compared against two strong baseline architectures: a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The dropout rate, positional encoding, and scaling factor were also investigated to find the best model. The results show that the proposed model achieves a test accuracy of 91.75%, which is comparable to both baseline models.
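The core mechanism the abstract refers to, scaled dot-product self-attention over a windowed sensor sequence with sinusoidal positional encoding, can be sketched as follows. This is a minimal, assumed illustration (single head, identity projections), not the authors' actual multi-layer, multi-head model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def positional_encoding(seq_len, d_model):
    # Standard sinusoidal positional encoding added to the input sequence.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])
    pe[:, 1::2] = np.cos(angle[:, 1::2])
    return pe

def self_attention(x, scale=True):
    # Single-head self-attention; Q = K = V = x for brevity.
    d_k = x.shape[-1]
    scores = x @ x.T
    if scale:
        # The 1/sqrt(d_k) scaling factor is one of the hyperparameters
        # the abstract says was investigated.
        scores = scores / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ x, weights

# Hypothetical input: a window of 128 time steps with 64-dim features.
window = np.random.randn(128, 64)
window = window + positional_encoding(128, 64)
out, attn = self_attention(window)
print(out.shape, attn.shape)  # (128, 64) (128, 128)
```

In a full model, learned Q/K/V projections, multiple heads, and dropout on the attention weights would be added on top of this skeleton.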
A Survey on Multi-Resident Activity Recognition in Smart Environments
Human activity recognition (HAR) is a rapidly growing field that utilizes
smart devices, sensors, and algorithms to automatically classify and identify
the actions of individuals within a given environment. These systems have a
wide range of applications, including assisting with caring tasks, increasing
security, and improving energy efficiency. However, there are several
challenges that must be addressed in order to effectively utilize HAR systems
in multi-resident environments. One of the key challenges is accurately
associating sensor observations with the identities of the individuals
involved, which can be particularly difficult when residents are engaging in
complex and collaborative activities. This paper provides a brief overview of
the design and implementation of HAR systems, including a summary of the
various data collection devices and approaches used for human activity
identification. It also reviews previous research on the use of these systems
in multi-resident environments and offers conclusions on the current state of
the art in the field.
Comment: 16 pages, to appear in Evolution of Information, Communication and
Computing Systems (EICCS) Book Series
Human activity recognition of individuals with lower limb amputation in free-living conditions: a pilot study
This pilot study aimed to investigate the implementation of supervised classifiers and a neural network for the recognition of activities carried out by Individuals with Lower Limb Amputation (ILLAs), as well as individuals without gait impairment, in free-living conditions. Eight individuals with no gait impairments and four ILLAs wore a thigh-based accelerometer and walked on an improvised route in the vicinity of their homes across a variety of terrains. Various machine learning classifiers were trained and tested for recognition of walking activities. Additional investigations were made into how the level of detail in the activity labels affected classifier accuracy, and whether classifiers trained exclusively on non-impaired individuals' data could recognize physical activities carried out by ILLAs. At a basic level of label detail, Support Vector Machines (SVM) and Long Short-Term Memory (LSTM) networks achieved 77–78% mean classification accuracy, which fell with increased label detail. Classifiers trained on individuals without gait impairment could not recognize activities carried out by ILLAs. This investigation presents the groundwork for a HAR system capable of recognizing a variety of walking activities, both for individuals with no gait impairments and for ILLAs.
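A typical front end for this kind of accelerometer-based HAR pipeline segments the tri-axial signal into sliding windows and extracts simple statistical features before feeding a classifier. The following is an assumed sketch of that common preprocessing step, with hypothetical window sizes, not the study's actual feature set:

```python
import numpy as np

def sliding_windows(signal, win, step):
    # signal: (n_samples, 3) tri-axial accelerometer trace.
    # Returns an array of overlapping windows, shape (n_windows, win, 3).
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def window_features(w):
    # Common hand-crafted HAR features: per-axis mean and standard
    # deviation, plus the signal magnitude area (SMA).
    mean = w.mean(axis=0)
    std = w.std(axis=0)
    sma = np.abs(w).sum() / len(w)
    return np.concatenate([mean, std, [sma]])

# Hypothetical trace: 1000 samples of tri-axial acceleration.
acc = np.random.randn(1000, 3)
wins = sliding_windows(acc, win=128, step=64)   # 50% overlap
X = np.array([window_features(w) for w in wins])
print(wins.shape, X.shape)  # (14, 128, 3) (14, 7)
```

The resulting feature matrix X would then be paired with per-window activity labels and passed to a classifier such as an SVM; an LSTM would instead consume the raw windows directly.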