
    Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition

    The extensive, ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up possibilities for implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, and in light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition, outperforming traditional signal processing and traditional machine learning approaches. In this work, by performing extensive experimental studies on two human activity recognition datasets, we investigate the performance of common deep learning and machine learning approaches, different training mechanisms (such as contrastive learning), and various feature representations extracted from the sensor time-series data, and measure their effectiveness for the human activity recognition task. Comment: Seventh International Conference on Internet of Things and Applications (IoT 2023)
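As a rough, hypothetical illustration (not code from the paper): sensor-based pipelines like the one described typically begin by segmenting the raw time series into fixed-length, possibly overlapping windows, which are then fed either to a feature extractor or directly to a deep model.

```python
def sliding_windows(signal, size, step):
    """Segment a 1-D sensor time series into fixed-length,
    possibly overlapping windows (a common preprocessing step
    before feature extraction or a deep model)."""
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

readings = list(range(10))  # stand-in for accelerometer samples
windows = sliding_windows(readings, size=4, step=2)
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

With `step < size` the windows overlap, which increases the number of training examples at the cost of correlated samples; the window and step lengths here are arbitrary.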

    Vision Based Activity Recognition Using Machine Learning and Deep Learning Architecture

    Human activity recognition, with wide application in fields such as video surveillance, sports, human interaction, and elderly care, has shown great influence in raising people's standard of living. With the constant development of new architectures and models, and an increase in the computational capability of systems, the adoption of machine learning and deep learning for activity recognition has shown great improvement, with high performance in recent years. My research goal in this thesis is to design and compare machine learning and deep learning models for activity recognition through videos collected from different media in the field of sports. Human activity recognition (HAR) is mostly the automatic recognition of actions performed by a human from data collected from different sources. Based on the literature review, most data collected for analysis are time-series data collected through different sensors and video data collected through cameras. So firstly, our research analyzes and compares different machine learning and deep learning architectures on sensor-based data collected from the accelerometer of a smartphone placed at different positions on the human body. Without any hand-crafted feature extraction methods, we found that deep learning architectures outperform most machine learning architectures, and that the use of multiple sensors yields higher accuracy than a dataset collected from a single sensor. Secondly, as collecting data from sensors in real time is not feasible in all fields, such as sports, we study activity recognition using video datasets. For this, we used two state-of-the-art deep learning architectures previously trained on big annotated datasets, applying transfer learning for activity recognition on three different publicly available sports-related datasets.
Extending the study to the different activities performed in a single sport, and to avoid the current trend of using special cameras and expensive setups around the court for data collection, we developed our own video dataset using sports coverage of basketball games broadcast through broadcasting media. A detailed analysis and experiments based on different criteria, such as the range of shots taken and scoring activities, are presented for 8 different activities using state-of-the-art deep learning architectures for video classification.

    Inertial Sensor Based Modelling of Human Activity Classes: Feature Extraction and Multi-sensor Data Fusion Using Machine Learning Algorithms

    Wearable inertial sensors are currently receiving pronounced interest due to applications in unconstrained daily life settings, ambulatory monitoring and pervasive computing systems. This research focuses on the human activity recognition problem, in which inputs are multichannel time-series signals acquired from a set of body-worn inertial sensors and outputs are automatically classified human activities. A general-purpose framework has been presented for designing and evaluating an activity recognition system for six different activities using machine learning algorithms such as support vector machines (SVM) and artificial neural networks (ANN). Several feature selection methods were explored to make the recognition process faster by experimenting on the features extracted from the accelerometer and gyroscope time-series data collected from a number of volunteers. In addition, a detailed discussion is presented exploring how different design parameters, for example, the number of features and data fusion from multiple sensor locations, impact overall recognition performance.
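A minimal sketch of the kind of hand-crafted statistical feature extraction and feature-level multi-sensor fusion the abstract describes; the particular descriptors (mean, standard deviation, min, max) and the concatenation scheme are illustrative assumptions, not the paper's exact design:

```python
import statistics

def window_features(window):
    """Hand-crafted statistical descriptors for one sensor window."""
    return [statistics.mean(window),
            statistics.pstdev(window),
            min(window),
            max(window)]

def fuse(acc_window, gyro_window):
    """Feature-level fusion: concatenate per-sensor feature vectors."""
    return window_features(acc_window) + window_features(gyro_window)

acc = [0.1, 0.3, 0.2, 0.4]   # toy accelerometer window
gyro = [1.0, 1.2, 0.8, 1.0]  # toy gyroscope window
features = fuse(acc, gyro)   # 8-dimensional fused feature vector
```

The fused vector would then go to a classifier such as an SVM or ANN; fusing at the feature level (concatenation) is one common choice, fusing at the decision level (voting across per-sensor classifiers) is another.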

    Pairwise classification using combination of statistical descriptors with spectral analysis features for recognizing walking activities

    The advancement of sensor technology has provided valuable information for evaluating functional abilities in various application domains. Human activity recognition (HAR) has gained high demand from researchers exploring activity recognition systems that utilize micro-electro-mechanical systems (MEMS) sensor technology. A tri-axial accelerometer sensor, placed at selected areas of the human body, is utilized to record the signals of various kinds of activities. The presence of high inter-class similarity between two or more different activities is considered a recent challenge in HAR. The number of incorrectly classified instances involving various types of walking activities can degrade the average accuracy performance. Hence, pairwise classification learning methods are proposed to tackle the problem of differentiating between very similar activities. Several machine learning classifier models are applied using a hold-out validation approach to evaluate the proposed method.
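A toy sketch of the pairwise (one-vs-one) classification scheme the abstract proposes: one binary decision per pair of classes, with the final label chosen by majority vote. The nearest-centroid binary rule and the example labels here are hypothetical stand-ins for whatever classifiers and activities the paper actually uses.

```python
from itertools import combinations
from statistics import mean

def centroid(samples):
    """Component-wise mean of a list of feature vectors."""
    return [mean(col) for col in zip(*samples)]

def pairwise_predict(train, x):
    """One-vs-one scheme: for every pair of classes, a binary
    nearest-centroid rule casts a vote; the class with the most
    votes wins. `train` maps class label -> list of feature vectors."""
    votes = {label: 0 for label in train}
    for a, b in combinations(train, 2):
        ca, cb = centroid(train[a]), centroid(train[b])
        da = sum((xi - ci) ** 2 for xi, ci in zip(x, ca))
        db = sum((xi - ci) ** 2 for xi, ci in zip(x, cb))
        votes[a if da <= db else b] += 1
    return max(votes, key=votes.get)

train = {"walk":     [[1.0, 0.1], [1.1, 0.2]],
         "upstairs": [[1.0, 0.9], [1.1, 1.0]],
         "stand":    [[0.0, 0.0], [0.1, 0.1]]}
pairwise_predict(train, [1.05, 0.95])  # → "upstairs"
```

The appeal for highly similar activities is that each binary classifier only has to separate two classes, so it can focus on whichever features discriminate that particular pair.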

    A Comparison of Machine Learning and Deep Learning Techniques for Activity Recognition using Mobile Devices

    We have compared the performance of different machine learning techniques for human activity recognition. Experiments were conducted using a benchmark dataset where each subject wore a device in the pocket and another on the wrist. The dataset comprises thirteen activities, including physical activities, common postures, working activities and leisure activities. We apply a methodology known as the activity recognition chain, a sequence of steps involving preprocessing, segmentation, feature extraction and classification, for traditional machine learning methods; we also tested convolutional deep learning networks that operate on raw data instead of computed features. Results show that the combination of two sensors does not necessarily result in improved accuracy. We have determined that the best results are obtained by the extremely randomized trees approach, operating on precomputed features and on data obtained from the wrist sensor. Deep learning architectures did not produce competitive results with the tested architecture. This research was funded by the Spanish Ministry of Education, Culture and Sports under grant number FPU13/03917.
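A hypothetical, minimal rendering of the activity recognition chain named in the abstract (preprocessing, segmentation, feature extraction, classification); every function here is a toy stand-in, not the authors' implementation, and the threshold classifier in particular is only a placeholder for a real model such as extremely randomized trees.

```python
import math
from statistics import mean, pstdev

def preprocess(ax, ay, az):
    """Preprocessing: reduce tri-axial accelerometer data to a magnitude signal."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def segment(signal, size):
    """Segmentation: non-overlapping fixed-length windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def extract(window):
    """Feature extraction: mean and standard deviation per window."""
    return (mean(window), pstdev(window))

def classify(features):
    """Classification: toy rule -- high variance suggests movement."""
    return "active" if features[1] > 0.5 else "idle"

# Chain the four steps over a tiny synthetic recording.
ax, ay, az = [0, 3, 0, 3], [0, 4, 0, 4], [0, 0, 0, 0]
labels = [classify(extract(w)) for w in segment(preprocess(ax, ay, az), 2)]
# magnitudes [0, 5, 0, 5] -> windows [0, 5], [0, 5] -> both "active"
```

The value of framing the pipeline this way is that each stage can be swapped independently, e.g. replacing `extract` plus `classify` with a convolutional network operating directly on the segmented raw windows, as the abstract's deep learning comparison does.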

    BERT for Activity Recognition Using Sequences of Skeleton Features and Data Augmentation with GAN

    Recently, the scientific community has placed great emphasis on the recognition of human activity, especially in the area of health and care for the elderly. There are already practical applications of activity recognition and unusual-condition detection that use body sensors such as wrist-worn devices or neck pendants. These relatively simple devices may be prone to errors, might be uncomfortable to wear, might be forgotten or not worn, and are unable to detect more subtle conditions such as incorrect postures. Therefore, other proposed methods are based on the use of images and videos to carry out human activity recognition, even in open spaces and with multiple people. However, the resulting increase in size and complexity involved when using image data requires the most recent advanced machine learning and deep learning techniques. This paper presents an approach based on deep learning with attention for the recognition of activities from multiple frames. Feature extraction is performed by estimating the pose of the human skeleton, and classification is performed using a neural network based on Bidirectional Encoder Representations from Transformers (BERT). This algorithm was trained with the UP-Fall public dataset, generating more balanced artificial data with a Generative Adversarial Network (GAN), and evaluated with real data, outperforming the results of other activity recognition methods using the same dataset. This research was supported in part by the Chilean Research and Development Agency (ANID) under Project FONDECYT 1191188, The National University of Distance Education under Projects 2021V/-TAJOV/00 and OPTIVAC 096-034091 2021V/PUNED/008, and the Ministry of Science and Innovation of Spain under Project PID2019-108377RB-C32.