
    Action recognition using the Rf Transform on optical flow images

    The objective of this paper is the automatic recognition of human actions in video sequences. The use of spatio-temporal features for action recognition has become very popular in the recent literature. Instead of extracting the spatio-temporal features from the raw video sequence, some authors propose to project the sequence to a single template first. As a contribution, we propose the use of several variants of the R transform for projecting the image sequences to templates. The R transform projects the whole sequence to a single image, retaining information concerning movement direction and magnitude. Spatio-temporal features are extracted from the template, combined using a bag-of-words paradigm, and finally fed to an SVM for action classification. The method presented is shown to improve the state-of-the-art results on the standard Weizmann action dataset.
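    As a rough illustration of this projection step, the sketch below builds a θ-by-time template from a clip's optical-flow magnitude, assuming the R transform is the Radon transform squared and integrated over the radial coordinate; the Farnebäck flow estimator and all names and parameters are illustrative choices, not necessarily the paper's exact setup.

```python
# Minimal sketch: R-transform template of a video's optical-flow magnitude.
# Assumes OpenCV and scikit-image; names here are illustrative.
import cv2
import numpy as np
from skimage.transform import radon

def r_transform(frame, angles):
    """R transform of one 2D frame: the squared Radon transform
    integrated over the radial coordinate, giving one value per angle."""
    sinogram = radon(frame, theta=angles, circle=False)
    return np.sum(sinogram ** 2, axis=0)

def flow_template(video_path, num_angles=180):
    """Project a whole sequence to a single theta-by-time template image."""
    angles = np.linspace(0.0, 180.0, num_angles, endpoint=False)
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)   # motion magnitude per pixel
        columns.append(r_transform(magnitude, angles))
        prev_gray = gray
    cap.release()
    template = np.stack(columns, axis=1)           # shape: (angles, frames)
    return template / (template.max() + 1e-12)     # normalise for features
```

    Each column is the R transform of one flow-magnitude frame, so the whole clip collapses to a single image from which spatio-temporal descriptors can be extracted and quantised into bag-of-words histograms.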

    Temporal segmentation of human actions in video sequences

    Most published works on action recognition assume that the action sequences have been previously segmented in time, that is, that the action to be recognized starts with the first frame of the sequence and ends with the last one. However, temporal segmentation of actions in sequences is not an easy task, and is always prone to errors. In this paper, we present a new technique to automatically extract human actions from a video sequence. Our approach makes several contributions. First, we use a projection template scheme and find spatio-temporal features and descriptors within the projected surface, rather than extracting them from the whole sequence. For projecting the sequence we use a variant of the R transform, which has never been used before for temporal action segmentation. Instead of projecting the original video sequence, we project its optical flow components, preserving important information about action motion. We test our method on a publicly available action dataset, and the results show that it segments human actions very well compared with state-of-the-art methods.
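    As a hedged sketch of how such a projected surface can drive temporal segmentation, the code below thresholds per-frame R-transform energy and treats long low-energy runs as gaps between actions; the threshold rule, parameters, and names are assumptions rather than the paper's exact criterion.

```python
# Sketch: segment actions in time from a projected (angles x frames) surface,
# assuming low overall R-transform energy indicates pauses between actions.
import numpy as np

def segment_actions(template, rel_threshold=0.15, min_length=10):
    """template: (angles, frames) R-transform surface of flow magnitudes.
    Returns a list of (start_frame, end_frame) action intervals."""
    energy = template.sum(axis=0)                  # motion energy per frame
    active = energy > rel_threshold * energy.max()
    segments, start = [], None
    for t, flag in enumerate(active):
        if flag and start is None:
            start = t                              # an action begins
        elif not flag and start is not None:
            if t - start >= min_length:            # discard spurious blips
                segments.append((start, t - 1))
            start = None
    if start is not None and len(active) - start >= min_length:
        segments.append((start, len(active) - 1))  # action runs to the end
    return segments
```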

    An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data

    The current gold standard for human activity recognition (HAR) is based on the use of cameras. However, the poor scalability of camera systems renders them impractical in pursuit of the goal of wider adoption of HAR in mobile computing contexts. Consequently, researchers instead rely on wearable sensors, in particular inertial sensors. A particularly prevalent wearable is the smartwatch, which, due to its integrated inertial and optical sensing capabilities, holds great potential for realising better HAR in a non-obtrusive way. This paper seeks to simplify the wearable approach to HAR by determining whether the wrist-mounted optical sensor alone, as typically found in a smartwatch or similar device, can serve as a useful source of data for activity recognition. The approach has the potential to eliminate the need for the inertial sensing element, which would in turn reduce the cost and complexity of smartwatches and fitness trackers. This could commoditise the hardware requirements for HAR while retaining the functionality of both heart rate monitoring and activity capture, all from a single optical sensor. Our approach relies on the adoption of machine vision for activity recognition based on suitably scaled plots of the optical signals. We take this approach so as to produce classifications that are easily explainable and interpretable by non-technical users. More specifically, images of photoplethysmography signal time series are used to retrain the penultimate layer of a convolutional neural network which was initially trained on the ImageNet database. We then use the 2048-dimensional features from the penultimate layer as input to a support vector machine. Results from the experiment yielded an average classification accuracy of 92.3%. This result outperforms that of an optical and inertial sensor combined (78%) and illustrates the capability of HAR systems using...
    Comment: 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science
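    A minimal sketch of this transfer-learning pipeline follows, assuming ResNet50 as the ImageNet backbone (its global-average-pooled penultimate layer is 2048-dimensional, matching the feature size quoted above, though the authors' exact architecture is not stated here) and an illustrative way of rendering the PPG plots.

```python
# Sketch: render PPG windows as images, extract 2048-d CNN features,
# and classify with an SVM. Paths and names are illustrative.
import numpy as np
import matplotlib
matplotlib.use("Agg")                  # off-screen rendering
import matplotlib.pyplot as plt
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.svm import SVC

# ImageNet-pretrained backbone; pooled penultimate layer gives 2048 features.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def ppg_to_image(signal, size=224):
    """Render a suitably scaled PPG time-series plot as an RGB array."""
    fig, ax = plt.subplots(figsize=(size / 100, size / 100), dpi=100)
    ax.plot(signal, color="black", linewidth=1)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()
    plt.close(fig)
    return img

def extract_features(signals):
    """2048-d CNN features for a list of 1-D PPG windows."""
    images = np.stack([ppg_to_image(s) for s in signals]).astype("float32")
    return backbone.predict(preprocess_input(images))

# Usage with hypothetical training data:
# X = extract_features(train_windows)          # shape (n, 2048)
# clf = SVC(kernel="linear").fit(X, train_labels)
```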

    Sparse Feature Extraction for Activity Detection Using Low-Resolution IR Streams

    In this paper, we propose an activity recognition method based on ultra-low-resolution infrared (IR) images, suitable for monitoring in elderly care homes and modern smart homes. The focus is on the analysis of sequences of IR frames featuring a single subject performing daily activities. The pixels are treated as independent variables because of the lack of spatial dependencies between pixels in ultra-low-resolution images. Our analysis is therefore based on the temporal variation of the pixels in vectorised sequences of several IR frames, which results in a high-dimensional feature space and an "n << p" problem. Two different sparse analysis strategies are used and compared: Sparse Discriminant Analysis (SDA) and Sparse Principal Component Analysis (SPCA). The extracted sparse features are tested with four widely used classifiers: Support Vector Machines (SVM), Random Forests (RF), K-Nearest Neighbours (KNN) and Logistic Regression (LR). To demonstrate the utility of the sparse features, we also compare the classification results obtained from sparse features on noisy data against those from non-sparse features. The comparison shows the superiority of the sparse methods in terms of noise tolerance and accuracy.
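    A brief sketch of the SPCA branch of this pipeline, using scikit-learn, is given below; the frame size, window length, and hyperparameters are illustrative assumptions, and the SDA variant would slot into the same place as the transformer.

```python
# Sketch: vectorise windows of ultra-low-res IR frames (n << p) and
# classify sparse SPCA features with an SVM. Settings are illustrative.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def vectorise_windows(frames, window=10):
    """Stack `window` consecutive low-res IR frames (e.g. 8x8) into one
    long vector per window, so n samples << p dimensions holds."""
    flat = frames.reshape(len(frames), -1)          # (T, 64) for 8x8 frames
    return np.stack([flat[i:i + window].ravel()
                     for i in range(len(flat) - window + 1)])

# Hypothetical data: T frames of 8x8 IR pixels with per-window labels y.
# X = vectorise_windows(ir_frames)                  # (n, 640)
# model = make_pipeline(SparsePCA(n_components=20, alpha=1.0),
#                       SVC(kernel="rbf"))
# model.fit(X, y)
```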