Human Activity Recognition (HAR) is an essential building block of many applications, such as security, monitoring, the Internet of Things and human-robot interaction. The research community has developed various methodologies to detect human activity from a range of input types. However, most research in the field has focused on applications other than human-in-the-centre applications. This paper focuses on optimising the input signals to maximise HAR performance from wearable sensors. A model based on
Convolutional Neural Networks (CNN) is proposed and trained on different signal combinations from three Inertial Measurement Units (IMUs) that capture the movements of the subject's dominant hand, leg and chest. The results
demonstrate k-fold cross-validation accuracy between 99.77% and 99.98% for signals with a modality of 12 or higher. The performance of lower-dimensional signals, except those containing information from both the chest and the ankle, was far inferior, ranging between 73% and 85% accuracy.
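As an illustration of the kind of model described above, the following is a minimal sketch of a 1D CNN classifier for windowed multi-channel IMU signals, written with Keras. The window length, channel count, layer sizes and number of activity classes are assumptions chosen for illustration, not values taken from the paper; a k-fold cross-validation loop would simply wrap the training and evaluation calls shown here.

```python
# Minimal sketch of a 1D-CNN activity classifier over windowed IMU signals.
# All hyperparameters (window length, channel count, filter sizes, number of
# activity classes) are illustrative assumptions, not values from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128    # assumed number of samples per sliding window
NUM_CHANNELS = 12   # assumed number of IMU signal channels (modality)
NUM_CLASSES = 6     # assumed number of activity classes

def build_model() -> tf.keras.Model:
    """Stack of Conv1D blocks over the time axis, followed by global pooling."""
    model = models.Sequential([
        layers.Input(shape=(WINDOW_LEN, NUM_CHANNELS)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; in practice these would be windowed IMU recordings
    # with integer activity labels.
    x = np.random.randn(256, WINDOW_LEN, NUM_CHANNELS).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=256)
    model = build_model()
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    print(model.evaluate(x, y, verbose=0))
```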