
    Comparing CNN and Human Crafted Features for Human Activity Recognition

    Deep learning techniques such as Convolutional Neural Networks (CNNs) have shown good results in activity recognition. One of the advantages of these methods is their ability to generate features automatically, which greatly simplifies feature extraction, a task that usually requires domain-specific knowledge, especially when using big data, where data-driven approaches can lead to anti-patterns. Despite this advantage, very little work has analyzed the quality of the extracted features, and more specifically how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for recognizing simple activities, applying the approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold-standard Human Crafted Features (HCF); (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated on publicly available datasets. In particular, we achieved a 93.38% F-score on the UCI-HAR dataset using a 1D CNN with 3 convolutional layers and a kernel size of 32, and a 90.5% F-score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor, and vehicle), using a 2D CNN with 2 convolutional layers and a 2x2 kernel size.
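
    A minimal sketch of the 1D CNN configuration the abstract names (3 convolutional layers, kernel size 32) is given below. The filter counts, pooling, and classifier head are not specified in the abstract and are illustrative assumptions; the input shape follows the common UCI-HAR windowing of 128 timesteps over 9 inertial channels with 6 activity classes.

```python
# Illustrative sketch, not the authors' exact model: a 1D CNN with
# 3 convolutional layers and kernel size 32, as stated in the abstract.
# Filter counts, pooling, and the dense head are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_1d_cnn(timesteps=128, channels=9, n_classes=6):
    model = models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.GlobalAveragePooling1D(),  # the feature vector whose class
                                          # separation the paper analyzes
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```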

    DeepWalking: Enabling Smartphone-based Walking Speed Estimation Using Deep Learning

    Walking speed estimation is an essential component of mobile apps in fields such as fitness, transportation, navigation, and health care. Most existing solutions focus on specialized medical applications that utilize body-worn motion sensors. These approaches do not effectively serve the general use case of the many apps in which a user holding a smartphone wants to estimate his or her walking speed solely from smartphone sensors, and existing smartphone-based approaches fail to provide acceptable precision for walking speed estimation. This leads to a question: can a smartphone achieve speed estimation accuracy comparable to that of obtrusive solutions based on wearable sensors? We find the answer in advanced neural networks. In this paper, we present DeepWalking, the first deep learning-based walking speed estimation scheme for smartphones. A deep convolutional neural network (DCNN) is applied to automatically identify and extract the most effective features from the accelerometer and gyroscope data of a smartphone and to train the network model for accurate speed estimation. Experiments are performed with 10 participants on a treadmill. The average root-mean-squared error (RMSE) of the estimated walking speed is 0.16 m/s, which is comparable to the results obtained by state-of-the-art approaches based on a number of body-worn sensors (RMSE of 0.11 m/s). The results indicate that a smartphone can be a strong tool for walking speed estimation if the sensor data are effectively calibrated and supported by advanced deep learning techniques.
    Comment: 6 pages, 9 figures, published in IEEE Global Communications Conference (GLOBECOM)
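
    The abstract does not describe the DeepWalking architecture itself, so the following is only a rough sketch of the kind of convolutional regressor it implies: windows of 6-channel accelerometer-plus-gyroscope data mapped to a scalar speed in m/s, trained with a squared-error loss whose root corresponds to the reported RMSE. Window length, layer depth, and filter counts are all assumptions.

```python
# Hypothetical convolutional speed regressor in the spirit of DeepWalking;
# not the authors' model. Input: (timesteps, 6) = accel xyz + gyro xyz.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_speed_regressor(timesteps=256, channels=6):
    model = models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Conv1D(32, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1),  # scalar walking speed in m/s
    ])
    # The paper's 0.16 m/s figure is an RMSE, i.e. the square root of
    # this MSE training objective evaluated on held-out data.
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model
```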

    An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data

    The current gold standard for human activity recognition (HAR) is based on the use of cameras. However, the poor scalability of camera systems makes them impractical for wider adoption of HAR in mobile computing contexts. Consequently, researchers instead rely on wearable sensors, in particular inertial sensors. A particularly prevalent wearable is the smart watch, which, due to its integrated inertial and optical sensing capabilities, holds great potential for realising better HAR in a non-obtrusive way. This paper seeks to simplify the wearable approach to HAR by determining whether the wrist-mounted optical sensor alone, as typically found in a smartwatch or similar device, can serve as a useful source of data for activity recognition. The approach has the potential to eliminate the need for the inertial sensing element, which would in turn reduce the cost and complexity of smartwatches and fitness trackers. This could commoditise the hardware requirements for HAR while retaining the functionality of both heart rate monitoring and activity capture from a single optical sensor. Our approach relies on machine vision for activity recognition based on suitably scaled plots of the optical signals. We take this approach so as to produce classifications that are easily explainable and interpretable by non-technical users. More specifically, images of photoplethysmography signal time series are used to retrain the penultimate layer of a convolutional neural network initially trained on the ImageNet database. We then use the 2048-dimensional features from the penultimate layer as input to a support vector machine. Results from the experiment yielded an average classification accuracy of 92.3%. This result outperforms that of an optical and inertial sensor combined (78%) and illustrates the capability of HAR systems using...
    Comment: 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science
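
    A sketch of the pipeline the abstract describes, frozen rather than retrained for brevity: rendered PPG plot images are passed through an ImageNet-pretrained CNN, the 2048-dimensional penultimate-layer activations are taken as features, and an SVM is trained on them. The choice of InceptionV3 as the backbone is an assumption (the paper names only ImageNet and a 2048-dimensional layer; any backbone with a 2048-d pooled output fits), and the image and label variables are hypothetical placeholders.

```python
# Hedged sketch of the features-plus-SVM pipeline described above.
# Assumes an InceptionV3 backbone (2048-d pooled output); the paper
# specifies only "a CNN trained on ImageNet" with 2048-d features.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from sklearn.svm import SVC

# Backbone with global average pooling -> one 2048-d vector per image.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(plot_images):
    """plot_images: float array (n, 299, 299, 3) of rendered PPG plots."""
    return backbone.predict(preprocess_input(plot_images), verbose=0)

# train_images / train_labels are hypothetical placeholders for the
# rendered PPG time-series plots and their activity labels:
# features = extract_features(train_images)        # shape (n, 2048)
# clf = SVC(kernel="rbf").fit(features, train_labels)
```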